00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2442 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3703 00:00:00.002 originally caused by: 00:00:00.005 Started by timer 00:00:00.005 Started by timer 00:00:00.123 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.124 The recommended git tool is: git 00:00:00.124 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.171 Fetching changes from the remote Git repository 00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.220 Using shallow fetch with depth 1 00:00:00.220 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.220 > git --version # timeout=10 00:00:00.250 > git --version # 'git version 2.39.2' 00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.917 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.928 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.938 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.938 > git config core.sparsecheckout # timeout=10 00:00:06.948 > git read-tree -mu HEAD # timeout=10 00:00:06.965 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.987 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.987 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.127 [Pipeline] Start of Pipeline 00:00:07.141 [Pipeline] library 00:00:07.143 Loading library shm_lib@master 00:00:07.143 Library shm_lib@master is cached. Copying from home. 00:00:07.158 [Pipeline] node 00:00:07.170 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:07.171 [Pipeline] { 00:00:07.202 [Pipeline] catchError 00:00:07.203 [Pipeline] { 00:00:07.216 [Pipeline] wrap 00:00:07.225 [Pipeline] { 00:00:07.233 [Pipeline] stage 00:00:07.235 [Pipeline] { (Prologue) 00:00:07.252 [Pipeline] echo 00:00:07.253 Node: VM-host-SM0 00:00:07.259 [Pipeline] cleanWs 00:00:07.269 [WS-CLEANUP] Deleting project workspace... 00:00:07.269 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.274 [WS-CLEANUP] done 00:00:07.466 [Pipeline] setCustomBuildProperty 00:00:07.558 [Pipeline] httpRequest 00:00:07.914 [Pipeline] echo 00:00:07.916 Sorcerer 10.211.164.101 is alive 00:00:07.925 [Pipeline] retry 00:00:07.926 [Pipeline] { 00:00:07.938 [Pipeline] httpRequest 00:00:07.942 HttpMethod: GET 00:00:07.943 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.943 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.944 Response Code: HTTP/1.1 200 OK 00:00:07.944 Success: Status code 200 is in the accepted range: 200,404 00:00:07.945 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.838 [Pipeline] } 00:00:08.857 [Pipeline] // retry 00:00:08.864 [Pipeline] sh 00:00:09.143 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.162 [Pipeline] httpRequest 00:00:09.503 [Pipeline] echo 00:00:09.504 Sorcerer 10.211.164.101 is alive 00:00:09.513 [Pipeline] retry 00:00:09.515 [Pipeline] { 00:00:09.528 [Pipeline] httpRequest 00:00:09.532 HttpMethod: GET 00:00:09.532 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.533 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.549 Response Code: HTTP/1.1 200 OK 00:00:09.549 Success: Status code 200 is in the accepted range: 200,404 00:00:09.550 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:03:53.018 [Pipeline] } 00:03:53.040 [Pipeline] // retry 00:03:53.049 [Pipeline] sh 00:03:53.352 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:03:56.652 [Pipeline] sh 00:03:56.933 + git -C spdk log --oneline -n5 00:03:56.933 c13c99a5e test: Various fixes for Fedora40 00:03:56.933 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:03:56.933 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:03:56.933 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:03:56.933 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:03:56.951 [Pipeline] writeFile 00:03:56.967 [Pipeline] sh 00:03:57.249 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:57.262 [Pipeline] sh 00:03:57.543 + cat autorun-spdk.conf 00:03:57.544 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:57.544 SPDK_TEST_NVMF=1 00:03:57.544 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:57.544 SPDK_TEST_VFIOUSER=1 00:03:57.544 SPDK_TEST_USDT=1 00:03:57.544 SPDK_RUN_UBSAN=1 00:03:57.544 SPDK_TEST_NVMF_MDNS=1 00:03:57.544 NET_TYPE=virt 00:03:57.544 SPDK_JSONRPC_GO_CLIENT=1 00:03:57.544 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:57.551 RUN_NIGHTLY=1 00:03:57.553 [Pipeline] } 00:03:57.566 [Pipeline] // stage 00:03:57.581 [Pipeline] stage 00:03:57.583 [Pipeline] { (Run VM) 00:03:57.596 [Pipeline] sh 00:03:57.878 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:57.878 + echo 'Start stage prepare_nvme.sh' 00:03:57.878 Start stage prepare_nvme.sh 00:03:57.878 + [[ -n 1 ]] 00:03:57.878 + disk_prefix=ex1 00:03:57.878 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:03:57.878 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:03:57.878 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:03:57.878 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:57.878 ++ SPDK_TEST_NVMF=1 
00:03:57.878 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:57.878 ++ SPDK_TEST_VFIOUSER=1 00:03:57.878 ++ SPDK_TEST_USDT=1 00:03:57.878 ++ SPDK_RUN_UBSAN=1 00:03:57.878 ++ SPDK_TEST_NVMF_MDNS=1 00:03:57.878 ++ NET_TYPE=virt 00:03:57.878 ++ SPDK_JSONRPC_GO_CLIENT=1 00:03:57.878 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:57.878 ++ RUN_NIGHTLY=1 00:03:57.878 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:57.878 + nvme_files=() 00:03:57.878 + declare -A nvme_files 00:03:57.878 + backend_dir=/var/lib/libvirt/images/backends 00:03:57.878 + nvme_files['nvme.img']=5G 00:03:57.878 + nvme_files['nvme-cmb.img']=5G 00:03:57.878 + nvme_files['nvme-multi0.img']=4G 00:03:57.878 + nvme_files['nvme-multi1.img']=4G 00:03:57.878 + nvme_files['nvme-multi2.img']=4G 00:03:57.878 + nvme_files['nvme-openstack.img']=8G 00:03:57.878 + nvme_files['nvme-zns.img']=5G 00:03:57.878 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:57.878 + (( SPDK_TEST_FTL == 1 )) 00:03:57.878 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:57.878 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:57.878 + for nvme in "${!nvme_files[@]}" 00:03:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:03:57.878 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:57.878 + for nvme in "${!nvme_files[@]}" 00:03:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:03:57.878 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:57.878 + for nvme in "${!nvme_files[@]}" 00:03:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:03:57.878 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:57.878 + for nvme in "${!nvme_files[@]}" 00:03:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:03:57.878 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:57.878 + for nvme in "${!nvme_files[@]}" 00:03:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:03:57.878 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:57.878 + for nvme in "${!nvme_files[@]}" 00:03:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:03:58.137 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:58.137 + for nvme in "${!nvme_files[@]}" 00:03:58.137 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:03:58.137 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:58.137 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:03:58.137 + echo 'End stage prepare_nvme.sh' 00:03:58.137 End stage prepare_nvme.sh 00:03:58.149 [Pipeline] sh 00:03:58.429 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:58.430 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 
--nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:03:58.430 00:03:58.430 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:03:58.430 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:03:58.430 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:58.430 HELP=0 00:03:58.430 DRY_RUN=0 00:03:58.430 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:03:58.430 NVME_DISKS_TYPE=nvme,nvme, 00:03:58.430 NVME_AUTO_CREATE=0 00:03:58.430 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:03:58.430 NVME_CMB=,, 00:03:58.430 NVME_PMR=,, 00:03:58.430 NVME_ZNS=,, 00:03:58.430 NVME_MS=,, 00:03:58.430 NVME_FDP=,, 00:03:58.430 SPDK_VAGRANT_DISTRO=fedora39 00:03:58.430 SPDK_VAGRANT_VMCPU=10 00:03:58.430 SPDK_VAGRANT_VMRAM=12288 00:03:58.430 SPDK_VAGRANT_PROVIDER=libvirt 00:03:58.430 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:58.430 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:58.430 SPDK_OPENSTACK_NETWORK=0 00:03:58.430 VAGRANT_PACKAGE_BOX=0 00:03:58.430 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:58.430 FORCE_DISTRO=true 00:03:58.430 VAGRANT_BOX_VERSION= 00:03:58.430 EXTRA_VAGRANTFILES= 00:03:58.430 NIC_MODEL=e1000 00:03:58.430 00:03:58.430 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:03:58.430 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:04:01.715 Bringing machine 'default' up with 'libvirt' provider... 00:04:02.281 ==> default: Creating image (snapshot of base box volume). 00:04:02.540 ==> default: Creating domain with the following settings... 
00:04:02.540 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733494688_210bccaffd0b53bb4a9a 00:04:02.540 ==> default: -- Domain type: kvm 00:04:02.540 ==> default: -- Cpus: 10 00:04:02.540 ==> default: -- Feature: acpi 00:04:02.540 ==> default: -- Feature: apic 00:04:02.540 ==> default: -- Feature: pae 00:04:02.540 ==> default: -- Memory: 12288M 00:04:02.540 ==> default: -- Memory Backing: hugepages: 00:04:02.540 ==> default: -- Management MAC: 00:04:02.540 ==> default: -- Loader: 00:04:02.540 ==> default: -- Nvram: 00:04:02.540 ==> default: -- Base box: spdk/fedora39 00:04:02.540 ==> default: -- Storage pool: default 00:04:02.540 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733494688_210bccaffd0b53bb4a9a.img (20G) 00:04:02.540 ==> default: -- Volume Cache: default 00:04:02.540 ==> default: -- Kernel: 00:04:02.540 ==> default: -- Initrd: 00:04:02.540 ==> default: -- Graphics Type: vnc 00:04:02.540 ==> default: -- Graphics Port: -1 00:04:02.540 ==> default: -- Graphics IP: 127.0.0.1 00:04:02.540 ==> default: -- Graphics Password: Not defined 00:04:02.540 ==> default: -- Video Type: cirrus 00:04:02.540 ==> default: -- Video VRAM: 9216 00:04:02.540 ==> default: -- Sound Type: 00:04:02.540 ==> default: -- Keymap: en-us 00:04:02.540 ==> default: -- TPM Path: 00:04:02.540 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:02.540 ==> default: -- Command line args: 00:04:02.540 ==> default: -> value=-device, 00:04:02.540 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:04:02.540 ==> default: -> value=-drive, 00:04:02.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:04:02.540 ==> default: -> value=-device, 00:04:02.540 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:02.540 ==> default: -> value=-device, 00:04:02.540 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:04:02.540 ==> default: -> value=-drive, 00:04:02.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:02.540 ==> default: -> value=-device, 00:04:02.540 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:02.540 ==> default: -> value=-drive, 00:04:02.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:02.540 ==> default: -> value=-device, 00:04:02.540 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:02.540 ==> default: -> value=-drive, 00:04:02.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:02.540 ==> default: -> value=-device, 00:04:02.540 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:02.798 ==> default: Creating shared folders metadata... 00:04:02.798 ==> default: Starting domain. 00:04:04.700 ==> default: Waiting for domain to get an IP address... 00:04:22.792 ==> default: Waiting for SSH to become available... 00:04:22.792 ==> default: Configuring and enabling network interfaces... 
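[editor's note] The "-device"/"-drive" values listed above can be read as a single QEMU invocation: controller nvme-0 (serial 12340) gets one namespace backed by ex1-nvme.img, and controller nvme-1 (serial 12341) exposes three namespaces backed by the multi0/1/2 images. The sketch below only reassembles the flags shown verbatim in the log; in the real job vagrant-libvirt injects them into the libvirt domain, and the -machine/-smp/-m values here are illustrative assumptions taken from the VM settings (10 CPUs, 12288M), not part of the logged arguments.

  # Sketch only: hand-run equivalent of the NVMe topology logged above.
  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -machine q35,accel=kvm -smp 10 -m 12288 \
    -device nvme,id=nvme-0,serial=12340 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

  # The nsid 1-3 namespaces on nvme-1 are what later appear in the guest as
  # nvme1n1/nvme1n2/nvme1n3 in the setup.sh status output further down.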
00:04:25.325 default: SSH address: 192.168.121.245:22 00:04:25.325 default: SSH username: vagrant 00:04:25.325 default: SSH auth method: private key 00:04:27.228 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:35.356 ==> default: Mounting SSHFS shared folder... 00:04:36.321 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:36.321 ==> default: Checking Mount.. 00:04:37.695 ==> default: Folder Successfully Mounted! 00:04:37.695 ==> default: Running provisioner: file... 00:04:38.628 default: ~/.gitconfig => .gitconfig 00:04:38.886 00:04:38.886 SUCCESS! 00:04:38.886 00:04:38.886 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:04:38.886 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:38.886 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:04:38.886 00:04:38.894 [Pipeline] } 00:04:38.905 [Pipeline] // stage 00:04:38.913 [Pipeline] dir 00:04:38.913 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:04:38.915 [Pipeline] { 00:04:38.925 [Pipeline] catchError 00:04:38.927 [Pipeline] { 00:04:38.937 [Pipeline] sh 00:04:39.212 + vagrant ssh-config --host vagrant 00:04:39.212 + sed -ne /^Host/,$p 00:04:39.212 + tee ssh_conf 00:04:42.496 Host vagrant 00:04:42.496 HostName 192.168.121.245 00:04:42.496 User vagrant 00:04:42.496 Port 22 00:04:42.496 UserKnownHostsFile /dev/null 00:04:42.496 StrictHostKeyChecking no 00:04:42.496 PasswordAuthentication no 00:04:42.496 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:42.496 IdentitiesOnly yes 00:04:42.496 LogLevel FATAL 00:04:42.496 ForwardAgent yes 00:04:42.496 ForwardX11 yes 00:04:42.496 00:04:42.510 [Pipeline] withEnv 00:04:42.513 [Pipeline] { 00:04:42.530 [Pipeline] sh 00:04:42.916 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:42.917 source /etc/os-release 00:04:42.917 [[ -e /image.version ]] && img=$(< /image.version) 00:04:42.917 # Minimal, systemd-like check. 00:04:42.917 if [[ -e /.dockerenv ]]; then 00:04:42.917 # Clear garbage from the node's name: 00:04:42.917 # agt-er_autotest_547-896 -> autotest_547-896 00:04:42.917 # $HOSTNAME is the actual container id 00:04:42.917 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:42.917 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:42.917 # We can assume this is a mount from a host where container is running, 00:04:42.917 # so fetch its hostname to easily identify the target swarm worker. 
00:04:42.917 container="$(< /etc/hostname) ($agent)" 00:04:42.917 else 00:04:42.917 # Fallback 00:04:42.917 container=$agent 00:04:42.917 fi 00:04:42.917 fi 00:04:42.917 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:42.917 00:04:42.957 [Pipeline] } 00:04:42.972 [Pipeline] // withEnv 00:04:42.981 [Pipeline] setCustomBuildProperty 00:04:42.997 [Pipeline] stage 00:04:42.999 [Pipeline] { (Tests) 00:04:43.017 [Pipeline] sh 00:04:43.301 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:43.572 [Pipeline] sh 00:04:43.850 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:44.121 [Pipeline] timeout 00:04:44.122 Timeout set to expire in 1 hr 0 min 00:04:44.124 [Pipeline] { 00:04:44.137 [Pipeline] sh 00:04:44.415 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:44.982 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:04:44.994 [Pipeline] sh 00:04:45.271 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:45.542 [Pipeline] sh 00:04:45.820 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:46.093 [Pipeline] sh 00:04:46.373 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:04:46.632 ++ readlink -f spdk_repo 00:04:46.632 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:46.632 + [[ -n /home/vagrant/spdk_repo ]] 00:04:46.632 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:46.632 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:46.632 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:46.632 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:46.632 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:46.632 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:04:46.632 + cd /home/vagrant/spdk_repo 00:04:46.632 + source /etc/os-release 00:04:46.632 ++ NAME='Fedora Linux' 00:04:46.632 ++ VERSION='39 (Cloud Edition)' 00:04:46.632 ++ ID=fedora 00:04:46.632 ++ VERSION_ID=39 00:04:46.632 ++ VERSION_CODENAME= 00:04:46.632 ++ PLATFORM_ID=platform:f39 00:04:46.632 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:46.632 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:46.632 ++ LOGO=fedora-logo-icon 00:04:46.632 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:46.632 ++ HOME_URL=https://fedoraproject.org/ 00:04:46.632 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:46.632 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:46.632 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:46.632 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:46.632 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:46.632 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:46.632 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:46.632 ++ SUPPORT_END=2024-11-12 00:04:46.632 ++ VARIANT='Cloud Edition' 00:04:46.632 ++ VARIANT_ID=cloud 00:04:46.632 + uname -a 00:04:46.632 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:46.632 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:46.632 Hugepages 00:04:46.632 node hugesize free / total 00:04:46.632 node0 1048576kB 0 / 0 00:04:46.632 node0 2048kB 0 / 0 00:04:46.632 00:04:46.632 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:46.632 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:46.632 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:46.632 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:46.632 + rm -f /tmp/spdk-ld-path 00:04:46.632 + source autorun-spdk.conf 00:04:46.632 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:46.632 ++ SPDK_TEST_NVMF=1 00:04:46.632 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:46.632 ++ SPDK_TEST_VFIOUSER=1 00:04:46.632 ++ SPDK_TEST_USDT=1 00:04:46.632 ++ SPDK_RUN_UBSAN=1 00:04:46.632 ++ SPDK_TEST_NVMF_MDNS=1 00:04:46.632 ++ NET_TYPE=virt 00:04:46.632 ++ SPDK_JSONRPC_GO_CLIENT=1 00:04:46.632 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:46.632 ++ RUN_NIGHTLY=1 00:04:46.632 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:46.632 + [[ -n '' ]] 00:04:46.632 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:46.632 + for M in /var/spdk/build-*-manifest.txt 00:04:46.632 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:46.632 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:46.890 + for M in /var/spdk/build-*-manifest.txt 00:04:46.890 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:46.890 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:46.890 + for M in /var/spdk/build-*-manifest.txt 00:04:46.890 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:46.890 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:46.890 ++ uname 00:04:46.890 + [[ Linux == \L\i\n\u\x ]] 00:04:46.890 + sudo dmesg -T 00:04:46.890 + sudo dmesg --clear 00:04:46.890 + dmesg_pid=5238 00:04:46.890 + [[ Fedora Linux == FreeBSD ]] 00:04:46.890 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:46.890 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:46.890 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 
]] 00:04:46.890 + [[ -x /usr/src/fio-static/fio ]] 00:04:46.890 + sudo dmesg -Tw 00:04:46.890 + export FIO_BIN=/usr/src/fio-static/fio 00:04:46.890 + FIO_BIN=/usr/src/fio-static/fio 00:04:46.890 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:46.890 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:46.890 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:46.890 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:46.890 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:46.890 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:46.890 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:46.890 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:46.890 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:46.890 Test configuration: 00:04:46.890 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:46.890 SPDK_TEST_NVMF=1 00:04:46.890 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:46.890 SPDK_TEST_VFIOUSER=1 00:04:46.890 SPDK_TEST_USDT=1 00:04:46.890 SPDK_RUN_UBSAN=1 00:04:46.890 SPDK_TEST_NVMF_MDNS=1 00:04:46.890 NET_TYPE=virt 00:04:46.890 SPDK_JSONRPC_GO_CLIENT=1 00:04:46.890 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:46.890 RUN_NIGHTLY=1 14:18:53 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:04:46.890 14:18:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.890 14:18:53 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:46.890 14:18:53 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.890 14:18:53 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.890 14:18:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.890 14:18:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.890 14:18:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.890 14:18:53 -- paths/export.sh@5 -- $ export PATH 00:04:46.890 14:18:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.890 14:18:53 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:46.890 14:18:53 -- 
common/autobuild_common.sh@440 -- $ date +%s 00:04:46.890 14:18:53 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733494733.XXXXXX 00:04:46.890 14:18:53 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733494733.sGJHTI 00:04:46.890 14:18:53 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:04:46.890 14:18:53 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:04:46.890 14:18:53 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:46.890 14:18:53 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:46.890 14:18:53 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:46.890 14:18:53 -- common/autobuild_common.sh@456 -- $ get_config_params 00:04:46.890 14:18:53 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:04:46.890 14:18:53 -- common/autotest_common.sh@10 -- $ set +x 00:04:46.890 14:18:53 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:04:46.891 14:18:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:46.891 14:18:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:46.891 14:18:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:46.891 14:18:53 -- spdk/autobuild.sh@16 -- $ date -u 00:04:46.891 Fri Dec 6 02:18:53 PM UTC 2024 00:04:46.891 14:18:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:46.891 LTS-67-gc13c99a5e 00:04:46.891 14:18:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:46.891 14:18:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:46.891 14:18:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:46.891 14:18:53 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:46.891 14:18:53 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:46.891 14:18:53 -- common/autotest_common.sh@10 -- $ set +x 00:04:46.891 ************************************ 00:04:46.891 START TEST ubsan 00:04:46.891 ************************************ 00:04:46.891 using ubsan 00:04:46.891 14:18:53 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:04:46.891 00:04:46.891 real 0m0.000s 00:04:46.891 user 0m0.000s 00:04:46.891 sys 0m0.000s 00:04:46.891 14:18:53 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:46.891 14:18:53 -- common/autotest_common.sh@10 -- $ set +x 00:04:46.891 ************************************ 00:04:46.891 END TEST ubsan 00:04:46.891 ************************************ 00:04:47.149 14:18:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:47.149 14:18:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:47.149 14:18:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:47.149 14:18:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:47.149 14:18:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:47.149 14:18:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:47.149 14:18:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:47.149 14:18:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:47.149 14:18:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror 
--with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:04:47.406 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:47.406 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:47.664 Using 'verbs' RDMA provider 00:05:03.104 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:05:13.071 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:05:13.330 go version go1.21.1 linux/amd64 00:05:13.589 Creating mk/config.mk...done. 00:05:13.589 Creating mk/cc.flags.mk...done. 00:05:13.589 Type 'make' to build. 00:05:13.589 14:19:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:05:13.589 14:19:20 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:05:13.589 14:19:20 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:13.589 14:19:20 -- common/autotest_common.sh@10 -- $ set +x 00:05:13.589 ************************************ 00:05:13.589 START TEST make 00:05:13.589 ************************************ 00:05:13.589 14:19:20 -- common/autotest_common.sh@1114 -- $ make -j10 00:05:13.847 make[1]: Nothing to be done for 'all'. 00:05:15.764 The Meson build system 00:05:15.764 Version: 1.5.0 00:05:15.764 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:05:15.764 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:05:15.764 Build type: native build 00:05:15.764 Project name: libvfio-user 00:05:15.764 Project version: 0.0.1 00:05:15.764 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:15.764 C linker for the host machine: cc ld.bfd 2.40-14 00:05:15.764 Host machine cpu family: x86_64 00:05:15.764 Host machine cpu: x86_64 00:05:15.764 Run-time dependency threads found: YES 00:05:15.764 Library dl found: YES 00:05:15.764 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:15.764 Run-time dependency json-c found: YES 0.17 00:05:15.764 Run-time dependency cmocka found: YES 1.1.7 00:05:15.764 Program pytest-3 found: NO 00:05:15.764 Program flake8 found: NO 00:05:15.764 Program misspell-fixer found: NO 00:05:15.764 Program restructuredtext-lint found: NO 00:05:15.764 Program valgrind found: YES (/usr/bin/valgrind) 00:05:15.764 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:15.764 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:15.764 Compiler for C supports arguments -Wwrite-strings: YES 00:05:15.764 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:15.764 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:05:15.764 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:05:15.764 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:15.764 Build targets in project: 8 00:05:15.764 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:15.764 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:15.764 00:05:15.764 libvfio-user 0.0.1 00:05:15.764 00:05:15.764 User defined options 00:05:15.764 buildtype : debug 00:05:15.764 default_library: shared 00:05:15.764 libdir : /usr/local/lib 00:05:15.764 00:05:15.764 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:16.023 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:05:16.023 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:16.023 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:05:16.023 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:05:16.023 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:16.023 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:05:16.281 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:05:16.281 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:05:16.281 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:05:16.281 [9/37] Compiling C object samples/null.p/null.c.o 00:05:16.281 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:05:16.281 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:05:16.281 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:16.281 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:16.281 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:16.281 [15/37] Compiling C object samples/client.p/client.c.o 00:05:16.281 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:05:16.539 [17/37] Compiling C object samples/server.p/server.c.o 00:05:16.539 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:16.539 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:16.539 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:16.539 [21/37] Linking target samples/client 00:05:16.539 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:16.539 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:16.539 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:16.539 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:05:16.539 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:16.539 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:16.539 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:05:16.539 [29/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:16.539 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:16.797 [31/37] Linking target test/unit_tests 00:05:16.797 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:16.797 [33/37] Linking target samples/null 00:05:16.797 [34/37] Linking target samples/gpio-pci-idio-16 00:05:16.797 [35/37] Linking target samples/server 00:05:16.797 [36/37] Linking target samples/lspci 00:05:16.797 [37/37] Linking target samples/shadow_ioeventfd_server 00:05:16.797 INFO: autodetecting backend as ninja 00:05:16.797 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:05:16.797 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:05:17.362 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:05:17.362 ninja: no work to do. 00:05:27.338 The Meson build system 00:05:27.338 Version: 1.5.0 00:05:27.338 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:27.338 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:27.338 Build type: native build 00:05:27.338 Program cat found: YES (/usr/bin/cat) 00:05:27.338 Project name: DPDK 00:05:27.338 Project version: 23.11.0 00:05:27.338 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:27.338 C linker for the host machine: cc ld.bfd 2.40-14 00:05:27.338 Host machine cpu family: x86_64 00:05:27.338 Host machine cpu: x86_64 00:05:27.338 Message: ## Building in Developer Mode ## 00:05:27.338 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:27.338 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:27.338 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:27.338 Program python3 found: YES (/usr/bin/python3) 00:05:27.338 Program cat found: YES (/usr/bin/cat) 00:05:27.338 Compiler for C supports arguments -march=native: YES 00:05:27.338 Checking for size of "void *" : 8 00:05:27.338 Checking for size of "void *" : 8 (cached) 00:05:27.338 Library m found: YES 00:05:27.338 Library numa found: YES 00:05:27.338 Has header "numaif.h" : YES 00:05:27.338 Library fdt found: NO 00:05:27.338 Library execinfo found: NO 00:05:27.338 Has header "execinfo.h" : YES 00:05:27.338 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:27.338 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:27.338 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:27.338 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:27.338 Run-time dependency openssl found: YES 3.1.1 00:05:27.338 Run-time dependency libpcap found: YES 1.10.4 00:05:27.338 Has header "pcap.h" with dependency libpcap: YES 00:05:27.338 Compiler for C supports arguments -Wcast-qual: YES 00:05:27.338 Compiler for C supports arguments -Wdeprecated: YES 00:05:27.338 Compiler for C supports arguments -Wformat: YES 00:05:27.338 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:27.338 Compiler for C supports arguments -Wformat-security: NO 00:05:27.338 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:27.338 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:27.338 Compiler for C supports arguments -Wnested-externs: YES 00:05:27.338 Compiler for C supports arguments -Wold-style-definition: YES 00:05:27.338 Compiler for C supports arguments -Wpointer-arith: YES 00:05:27.338 Compiler for C supports arguments -Wsign-compare: YES 00:05:27.338 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:27.338 Compiler for C supports arguments -Wundef: YES 00:05:27.338 Compiler for C supports arguments -Wwrite-strings: YES 00:05:27.338 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:27.338 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:27.338 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:27.338 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:27.338 Program objdump found: YES (/usr/bin/objdump) 00:05:27.338 
Compiler for C supports arguments -mavx512f: YES 00:05:27.338 Checking if "AVX512 checking" compiles: YES 00:05:27.338 Fetching value of define "__SSE4_2__" : 1 00:05:27.338 Fetching value of define "__AES__" : 1 00:05:27.338 Fetching value of define "__AVX__" : 1 00:05:27.338 Fetching value of define "__AVX2__" : 1 00:05:27.338 Fetching value of define "__AVX512BW__" : (undefined) 00:05:27.338 Fetching value of define "__AVX512CD__" : (undefined) 00:05:27.338 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:27.338 Fetching value of define "__AVX512F__" : (undefined) 00:05:27.338 Fetching value of define "__AVX512VL__" : (undefined) 00:05:27.338 Fetching value of define "__PCLMUL__" : 1 00:05:27.338 Fetching value of define "__RDRND__" : 1 00:05:27.338 Fetching value of define "__RDSEED__" : 1 00:05:27.338 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:27.338 Fetching value of define "__znver1__" : (undefined) 00:05:27.338 Fetching value of define "__znver2__" : (undefined) 00:05:27.338 Fetching value of define "__znver3__" : (undefined) 00:05:27.338 Fetching value of define "__znver4__" : (undefined) 00:05:27.338 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:27.338 Message: lib/log: Defining dependency "log" 00:05:27.338 Message: lib/kvargs: Defining dependency "kvargs" 00:05:27.338 Message: lib/telemetry: Defining dependency "telemetry" 00:05:27.338 Checking for function "getentropy" : NO 00:05:27.338 Message: lib/eal: Defining dependency "eal" 00:05:27.338 Message: lib/ring: Defining dependency "ring" 00:05:27.338 Message: lib/rcu: Defining dependency "rcu" 00:05:27.338 Message: lib/mempool: Defining dependency "mempool" 00:05:27.338 Message: lib/mbuf: Defining dependency "mbuf" 00:05:27.338 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:27.338 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:27.338 Compiler for C supports arguments -mpclmul: YES 00:05:27.338 Compiler for C supports arguments -maes: YES 00:05:27.338 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:27.338 Compiler for C supports arguments -mavx512bw: YES 00:05:27.338 Compiler for C supports arguments -mavx512dq: YES 00:05:27.338 Compiler for C supports arguments -mavx512vl: YES 00:05:27.338 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:27.338 Compiler for C supports arguments -mavx2: YES 00:05:27.338 Compiler for C supports arguments -mavx: YES 00:05:27.338 Message: lib/net: Defining dependency "net" 00:05:27.338 Message: lib/meter: Defining dependency "meter" 00:05:27.338 Message: lib/ethdev: Defining dependency "ethdev" 00:05:27.338 Message: lib/pci: Defining dependency "pci" 00:05:27.338 Message: lib/cmdline: Defining dependency "cmdline" 00:05:27.338 Message: lib/hash: Defining dependency "hash" 00:05:27.338 Message: lib/timer: Defining dependency "timer" 00:05:27.338 Message: lib/compressdev: Defining dependency "compressdev" 00:05:27.338 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:27.338 Message: lib/dmadev: Defining dependency "dmadev" 00:05:27.338 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:27.338 Message: lib/power: Defining dependency "power" 00:05:27.338 Message: lib/reorder: Defining dependency "reorder" 00:05:27.338 Message: lib/security: Defining dependency "security" 00:05:27.338 Has header "linux/userfaultfd.h" : YES 00:05:27.338 Has header "linux/vduse.h" : YES 00:05:27.338 Message: lib/vhost: Defining dependency "vhost" 00:05:27.338 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:05:27.338 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:27.338 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:27.338 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:27.338 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:27.338 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:27.338 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:27.338 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:27.338 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:27.338 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:27.338 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:27.338 Configuring doxy-api-html.conf using configuration 00:05:27.338 Configuring doxy-api-man.conf using configuration 00:05:27.338 Program mandb found: YES (/usr/bin/mandb) 00:05:27.338 Program sphinx-build found: NO 00:05:27.338 Configuring rte_build_config.h using configuration 00:05:27.338 Message: 00:05:27.339 ================= 00:05:27.339 Applications Enabled 00:05:27.339 ================= 00:05:27.339 00:05:27.339 apps: 00:05:27.339 00:05:27.339 00:05:27.339 Message: 00:05:27.339 ================= 00:05:27.339 Libraries Enabled 00:05:27.339 ================= 00:05:27.339 00:05:27.339 libs: 00:05:27.339 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:27.339 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:27.339 cryptodev, dmadev, power, reorder, security, vhost, 00:05:27.339 00:05:27.339 Message: 00:05:27.339 =============== 00:05:27.339 Drivers Enabled 00:05:27.339 =============== 00:05:27.339 00:05:27.339 common: 00:05:27.339 00:05:27.339 bus: 00:05:27.339 pci, vdev, 00:05:27.339 mempool: 00:05:27.339 ring, 00:05:27.339 dma: 00:05:27.339 00:05:27.339 net: 00:05:27.339 00:05:27.339 crypto: 00:05:27.339 00:05:27.339 compress: 00:05:27.339 00:05:27.339 vdpa: 00:05:27.339 00:05:27.339 00:05:27.339 Message: 00:05:27.339 ================= 00:05:27.339 Content Skipped 00:05:27.339 ================= 00:05:27.339 00:05:27.339 apps: 00:05:27.339 dumpcap: explicitly disabled via build config 00:05:27.339 graph: explicitly disabled via build config 00:05:27.339 pdump: explicitly disabled via build config 00:05:27.339 proc-info: explicitly disabled via build config 00:05:27.339 test-acl: explicitly disabled via build config 00:05:27.339 test-bbdev: explicitly disabled via build config 00:05:27.339 test-cmdline: explicitly disabled via build config 00:05:27.339 test-compress-perf: explicitly disabled via build config 00:05:27.339 test-crypto-perf: explicitly disabled via build config 00:05:27.339 test-dma-perf: explicitly disabled via build config 00:05:27.339 test-eventdev: explicitly disabled via build config 00:05:27.339 test-fib: explicitly disabled via build config 00:05:27.339 test-flow-perf: explicitly disabled via build config 00:05:27.339 test-gpudev: explicitly disabled via build config 00:05:27.339 test-mldev: explicitly disabled via build config 00:05:27.339 test-pipeline: explicitly disabled via build config 00:05:27.339 test-pmd: explicitly disabled via build config 00:05:27.339 test-regex: explicitly disabled via build config 00:05:27.339 test-sad: explicitly disabled via build config 00:05:27.339 test-security-perf: explicitly disabled via build config 00:05:27.339 00:05:27.339 libs: 00:05:27.339 metrics: explicitly 
disabled via build config 00:05:27.339 acl: explicitly disabled via build config 00:05:27.339 bbdev: explicitly disabled via build config 00:05:27.339 bitratestats: explicitly disabled via build config 00:05:27.339 bpf: explicitly disabled via build config 00:05:27.339 cfgfile: explicitly disabled via build config 00:05:27.339 distributor: explicitly disabled via build config 00:05:27.339 efd: explicitly disabled via build config 00:05:27.339 eventdev: explicitly disabled via build config 00:05:27.339 dispatcher: explicitly disabled via build config 00:05:27.339 gpudev: explicitly disabled via build config 00:05:27.339 gro: explicitly disabled via build config 00:05:27.339 gso: explicitly disabled via build config 00:05:27.339 ip_frag: explicitly disabled via build config 00:05:27.339 jobstats: explicitly disabled via build config 00:05:27.339 latencystats: explicitly disabled via build config 00:05:27.339 lpm: explicitly disabled via build config 00:05:27.339 member: explicitly disabled via build config 00:05:27.339 pcapng: explicitly disabled via build config 00:05:27.339 rawdev: explicitly disabled via build config 00:05:27.339 regexdev: explicitly disabled via build config 00:05:27.339 mldev: explicitly disabled via build config 00:05:27.339 rib: explicitly disabled via build config 00:05:27.339 sched: explicitly disabled via build config 00:05:27.339 stack: explicitly disabled via build config 00:05:27.339 ipsec: explicitly disabled via build config 00:05:27.339 pdcp: explicitly disabled via build config 00:05:27.339 fib: explicitly disabled via build config 00:05:27.339 port: explicitly disabled via build config 00:05:27.339 pdump: explicitly disabled via build config 00:05:27.339 table: explicitly disabled via build config 00:05:27.339 pipeline: explicitly disabled via build config 00:05:27.339 graph: explicitly disabled via build config 00:05:27.339 node: explicitly disabled via build config 00:05:27.339 00:05:27.339 drivers: 00:05:27.339 common/cpt: not in enabled drivers build config 00:05:27.339 common/dpaax: not in enabled drivers build config 00:05:27.339 common/iavf: not in enabled drivers build config 00:05:27.339 common/idpf: not in enabled drivers build config 00:05:27.339 common/mvep: not in enabled drivers build config 00:05:27.339 common/octeontx: not in enabled drivers build config 00:05:27.339 bus/auxiliary: not in enabled drivers build config 00:05:27.339 bus/cdx: not in enabled drivers build config 00:05:27.339 bus/dpaa: not in enabled drivers build config 00:05:27.339 bus/fslmc: not in enabled drivers build config 00:05:27.339 bus/ifpga: not in enabled drivers build config 00:05:27.339 bus/platform: not in enabled drivers build config 00:05:27.339 bus/vmbus: not in enabled drivers build config 00:05:27.339 common/cnxk: not in enabled drivers build config 00:05:27.339 common/mlx5: not in enabled drivers build config 00:05:27.339 common/nfp: not in enabled drivers build config 00:05:27.339 common/qat: not in enabled drivers build config 00:05:27.339 common/sfc_efx: not in enabled drivers build config 00:05:27.339 mempool/bucket: not in enabled drivers build config 00:05:27.339 mempool/cnxk: not in enabled drivers build config 00:05:27.339 mempool/dpaa: not in enabled drivers build config 00:05:27.339 mempool/dpaa2: not in enabled drivers build config 00:05:27.339 mempool/octeontx: not in enabled drivers build config 00:05:27.339 mempool/stack: not in enabled drivers build config 00:05:27.339 dma/cnxk: not in enabled drivers build config 00:05:27.339 dma/dpaa: not in 
enabled drivers build config 00:05:27.339 dma/dpaa2: not in enabled drivers build config 00:05:27.339 dma/hisilicon: not in enabled drivers build config 00:05:27.339 dma/idxd: not in enabled drivers build config 00:05:27.339 dma/ioat: not in enabled drivers build config 00:05:27.339 dma/skeleton: not in enabled drivers build config 00:05:27.339 net/af_packet: not in enabled drivers build config 00:05:27.339 net/af_xdp: not in enabled drivers build config 00:05:27.339 net/ark: not in enabled drivers build config 00:05:27.339 net/atlantic: not in enabled drivers build config 00:05:27.339 net/avp: not in enabled drivers build config 00:05:27.339 net/axgbe: not in enabled drivers build config 00:05:27.339 net/bnx2x: not in enabled drivers build config 00:05:27.339 net/bnxt: not in enabled drivers build config 00:05:27.339 net/bonding: not in enabled drivers build config 00:05:27.339 net/cnxk: not in enabled drivers build config 00:05:27.339 net/cpfl: not in enabled drivers build config 00:05:27.339 net/cxgbe: not in enabled drivers build config 00:05:27.339 net/dpaa: not in enabled drivers build config 00:05:27.339 net/dpaa2: not in enabled drivers build config 00:05:27.339 net/e1000: not in enabled drivers build config 00:05:27.339 net/ena: not in enabled drivers build config 00:05:27.339 net/enetc: not in enabled drivers build config 00:05:27.339 net/enetfec: not in enabled drivers build config 00:05:27.339 net/enic: not in enabled drivers build config 00:05:27.339 net/failsafe: not in enabled drivers build config 00:05:27.339 net/fm10k: not in enabled drivers build config 00:05:27.339 net/gve: not in enabled drivers build config 00:05:27.339 net/hinic: not in enabled drivers build config 00:05:27.339 net/hns3: not in enabled drivers build config 00:05:27.339 net/i40e: not in enabled drivers build config 00:05:27.339 net/iavf: not in enabled drivers build config 00:05:27.339 net/ice: not in enabled drivers build config 00:05:27.339 net/idpf: not in enabled drivers build config 00:05:27.339 net/igc: not in enabled drivers build config 00:05:27.339 net/ionic: not in enabled drivers build config 00:05:27.339 net/ipn3ke: not in enabled drivers build config 00:05:27.339 net/ixgbe: not in enabled drivers build config 00:05:27.339 net/mana: not in enabled drivers build config 00:05:27.339 net/memif: not in enabled drivers build config 00:05:27.339 net/mlx4: not in enabled drivers build config 00:05:27.339 net/mlx5: not in enabled drivers build config 00:05:27.339 net/mvneta: not in enabled drivers build config 00:05:27.339 net/mvpp2: not in enabled drivers build config 00:05:27.339 net/netvsc: not in enabled drivers build config 00:05:27.339 net/nfb: not in enabled drivers build config 00:05:27.339 net/nfp: not in enabled drivers build config 00:05:27.339 net/ngbe: not in enabled drivers build config 00:05:27.339 net/null: not in enabled drivers build config 00:05:27.339 net/octeontx: not in enabled drivers build config 00:05:27.339 net/octeon_ep: not in enabled drivers build config 00:05:27.339 net/pcap: not in enabled drivers build config 00:05:27.339 net/pfe: not in enabled drivers build config 00:05:27.339 net/qede: not in enabled drivers build config 00:05:27.339 net/ring: not in enabled drivers build config 00:05:27.339 net/sfc: not in enabled drivers build config 00:05:27.339 net/softnic: not in enabled drivers build config 00:05:27.339 net/tap: not in enabled drivers build config 00:05:27.339 net/thunderx: not in enabled drivers build config 00:05:27.339 net/txgbe: not in enabled drivers 
build config 00:05:27.340 net/vdev_netvsc: not in enabled drivers build config 00:05:27.340 net/vhost: not in enabled drivers build config 00:05:27.340 net/virtio: not in enabled drivers build config 00:05:27.340 net/vmxnet3: not in enabled drivers build config 00:05:27.340 raw/*: missing internal dependency, "rawdev" 00:05:27.340 crypto/armv8: not in enabled drivers build config 00:05:27.340 crypto/bcmfs: not in enabled drivers build config 00:05:27.340 crypto/caam_jr: not in enabled drivers build config 00:05:27.340 crypto/ccp: not in enabled drivers build config 00:05:27.340 crypto/cnxk: not in enabled drivers build config 00:05:27.340 crypto/dpaa_sec: not in enabled drivers build config 00:05:27.340 crypto/dpaa2_sec: not in enabled drivers build config 00:05:27.340 crypto/ipsec_mb: not in enabled drivers build config 00:05:27.340 crypto/mlx5: not in enabled drivers build config 00:05:27.340 crypto/mvsam: not in enabled drivers build config 00:05:27.340 crypto/nitrox: not in enabled drivers build config 00:05:27.340 crypto/null: not in enabled drivers build config 00:05:27.340 crypto/octeontx: not in enabled drivers build config 00:05:27.340 crypto/openssl: not in enabled drivers build config 00:05:27.340 crypto/scheduler: not in enabled drivers build config 00:05:27.340 crypto/uadk: not in enabled drivers build config 00:05:27.340 crypto/virtio: not in enabled drivers build config 00:05:27.340 compress/isal: not in enabled drivers build config 00:05:27.340 compress/mlx5: not in enabled drivers build config 00:05:27.340 compress/octeontx: not in enabled drivers build config 00:05:27.340 compress/zlib: not in enabled drivers build config 00:05:27.340 regex/*: missing internal dependency, "regexdev" 00:05:27.340 ml/*: missing internal dependency, "mldev" 00:05:27.340 vdpa/ifc: not in enabled drivers build config 00:05:27.340 vdpa/mlx5: not in enabled drivers build config 00:05:27.340 vdpa/nfp: not in enabled drivers build config 00:05:27.340 vdpa/sfc: not in enabled drivers build config 00:05:27.340 event/*: missing internal dependency, "eventdev" 00:05:27.340 baseband/*: missing internal dependency, "bbdev" 00:05:27.340 gpu/*: missing internal dependency, "gpudev" 00:05:27.340 00:05:27.340 00:05:27.340 Build targets in project: 85 00:05:27.340 00:05:27.340 DPDK 23.11.0 00:05:27.340 00:05:27.340 User defined options 00:05:27.340 buildtype : debug 00:05:27.340 default_library : shared 00:05:27.340 libdir : lib 00:05:27.340 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:27.340 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:05:27.340 c_link_args : 00:05:27.340 cpu_instruction_set: native 00:05:27.340 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:27.340 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:27.340 enable_docs : false 00:05:27.340 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:27.340 enable_kmods : false 00:05:27.340 tests : false 00:05:27.340 00:05:27.340 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:27.905 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:27.905 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:27.905 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:27.905 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:27.905 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:27.905 [5/265] Linking static target lib/librte_kvargs.a 00:05:27.905 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:27.905 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:27.905 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:27.905 [9/265] Linking static target lib/librte_log.a 00:05:28.162 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:28.419 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.677 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:28.677 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:28.677 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:28.677 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:28.934 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:28.934 [17/265] Linking static target lib/librte_telemetry.a 00:05:28.934 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:28.934 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.934 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:28.934 [21/265] Linking target lib/librte_log.so.24.0 00:05:29.192 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:29.192 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:29.192 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:05:29.450 [25/265] Linking target lib/librte_kvargs.so.24.0 00:05:29.450 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:29.450 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:29.450 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:29.707 [29/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:05:29.707 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:29.707 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:29.707 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:29.965 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.965 [34/265] Linking target lib/librte_telemetry.so.24.0 00:05:29.965 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:29.965 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:29.965 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:30.222 [38/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:05:30.222 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:30.222 [40/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:30.479 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:30.479 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:30.479 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:30.479 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:30.736 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:30.736 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:30.993 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:30.993 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:31.251 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:31.251 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:31.251 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:31.251 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:31.511 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:31.511 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:31.769 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:31.769 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:31.769 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:32.027 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:32.027 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:32.027 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:32.027 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:32.027 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:32.027 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:32.027 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:32.286 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:32.286 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:32.543 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:32.543 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:32.828 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:32.828 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:33.086 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:33.086 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:33.086 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:33.086 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:33.086 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:33.086 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:33.086 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:33.086 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:33.086 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:33.086 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:33.344 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:33.911 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:33.911 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:34.170 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:34.170 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:34.170 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:34.170 [87/265] Linking static target lib/librte_ring.a 00:05:34.170 [88/265] Linking static target lib/librte_eal.a 00:05:34.170 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:34.429 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:34.429 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:34.429 [92/265] Linking static target lib/librte_rcu.a 00:05:34.429 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:34.429 [94/265] Linking static target lib/librte_mempool.a 00:05:34.687 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:34.687 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.945 [97/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.945 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:35.204 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:35.204 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:35.204 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:35.204 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:35.204 [103/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:35.204 [104/265] Linking static target lib/librte_mbuf.a 00:05:35.772 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:35.772 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:35.772 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:35.772 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:35.772 [109/265] Linking static target lib/librte_net.a 00:05:35.772 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.030 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:36.030 [112/265] Linking static target lib/librte_meter.a 00:05:36.030 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:36.287 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.287 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:36.545 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:36.545 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:36.545 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.545 [119/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.804 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:37.063 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:37.063 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:37.321 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:37.321 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:37.322 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:37.322 [126/265] Linking static target lib/librte_pci.a 00:05:37.322 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:37.579 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:37.579 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:37.579 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:37.838 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:37.838 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:37.838 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.838 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:37.838 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:37.838 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:37.838 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:37.838 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:37.838 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:37.838 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:37.838 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:37.838 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:37.838 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:38.096 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:38.096 [145/265] Linking static target lib/librte_ethdev.a 00:05:38.096 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:38.355 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:38.355 [148/265] Linking static target lib/librte_cmdline.a 00:05:38.614 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:38.614 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:38.614 [151/265] Linking static target lib/librte_timer.a 00:05:38.615 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:38.873 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:38.873 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:38.873 [155/265] Linking static target lib/librte_compressdev.a 00:05:38.873 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:38.873 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:38.873 [158/265] Linking static target lib/librte_hash.a 00:05:39.130 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:39.130 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.387 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:39.387 
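(The "User defined options" summary a few lines back records how the DPDK submodule was configured before this 265-step ninja run. As a hedged reconstruction only — the real invocation is generated by SPDK's configure/dpdkbuild machinery, not typed by hand — the equivalent meson/ninja commands, with option values copied from that summary, would look roughly like:

  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
      --buildtype=debug --default-library=shared \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
  ninja -C build-tmp -j 10          # -j 10 matches the backend command ninja reports later in this log
)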
[162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:39.644 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:39.644 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:39.901 [165/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.901 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:39.901 [167/265] Linking static target lib/librte_dmadev.a 00:05:39.901 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:39.901 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:39.901 [170/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.901 [171/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:40.158 [172/265] Linking static target lib/librte_cryptodev.a 00:05:40.159 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.159 [174/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:40.415 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:40.415 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.672 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:40.672 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:40.672 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:40.672 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:40.672 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:40.930 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:40.930 [183/265] Linking static target lib/librte_power.a 00:05:41.188 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:41.188 [185/265] Linking static target lib/librte_reorder.a 00:05:41.447 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:41.447 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:41.447 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:41.447 [189/265] Linking static target lib/librte_security.a 00:05:41.447 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:41.705 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.705 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:41.962 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.962 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.962 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:42.220 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:42.220 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:42.220 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.478 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:42.478 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:42.736 [201/265] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:42.736 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:42.736 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:42.736 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:42.994 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:42.994 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:42.994 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:42.994 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:42.994 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:43.252 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:43.252 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:43.252 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:43.252 [213/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:43.252 [214/265] Linking static target drivers/librte_bus_vdev.a 00:05:43.252 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:43.252 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:43.252 [217/265] Linking static target drivers/librte_bus_pci.a 00:05:43.511 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.511 [219/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:43.511 [220/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:43.769 [221/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.769 [222/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:43.769 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:43.769 [224/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:43.769 [225/265] Linking static target drivers/librte_mempool_ring.a 00:05:44.335 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:44.335 [227/265] Linking static target lib/librte_vhost.a 00:05:45.270 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.529 [229/265] Linking target lib/librte_eal.so.24.0 00:05:45.529 [230/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.529 [231/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:05:45.787 [232/265] Linking target lib/librte_pci.so.24.0 00:05:45.787 [233/265] Linking target lib/librte_meter.so.24.0 00:05:45.787 [234/265] Linking target lib/librte_ring.so.24.0 00:05:45.787 [235/265] Linking target lib/librte_timer.so.24.0 00:05:45.787 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:05:45.787 [237/265] Linking target lib/librte_dmadev.so.24.0 00:05:45.787 [238/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.787 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:05:45.787 [240/265] Generating symbol 
file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:05:45.787 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:05:45.787 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:05:45.787 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:05:45.787 [244/265] Linking target lib/librte_rcu.so.24.0 00:05:45.787 [245/265] Linking target lib/librte_mempool.so.24.0 00:05:45.787 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:05:46.046 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:05:46.046 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:05:46.046 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:05:46.046 [250/265] Linking target lib/librte_mbuf.so.24.0 00:05:46.305 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:05:46.305 [252/265] Linking target lib/librte_net.so.24.0 00:05:46.305 [253/265] Linking target lib/librte_reorder.so.24.0 00:05:46.305 [254/265] Linking target lib/librte_compressdev.so.24.0 00:05:46.305 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:05:46.564 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:05:46.564 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:05:46.564 [258/265] Linking target lib/librte_security.so.24.0 00:05:46.564 [259/265] Linking target lib/librte_hash.so.24.0 00:05:46.564 [260/265] Linking target lib/librte_cmdline.so.24.0 00:05:46.564 [261/265] Linking target lib/librte_ethdev.so.24.0 00:05:46.823 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:05:46.823 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:05:46.823 [264/265] Linking target lib/librte_power.so.24.0 00:05:46.823 [265/265] Linking target lib/librte_vhost.so.24.0 00:05:46.823 INFO: autodetecting backend as ninja 00:05:46.823 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:48.281 CC lib/ut/ut.o 00:05:48.281 CC lib/log/log_flags.o 00:05:48.281 CC lib/log/log_deprecated.o 00:05:48.281 CC lib/log/log.o 00:05:48.281 CC lib/ut_mock/mock.o 00:05:48.281 LIB libspdk_ut_mock.a 00:05:48.281 LIB libspdk_log.a 00:05:48.281 LIB libspdk_ut.a 00:05:48.281 SO libspdk_ut_mock.so.5.0 00:05:48.281 SO libspdk_ut.so.1.0 00:05:48.281 SO libspdk_log.so.6.1 00:05:48.281 SYMLINK libspdk_ut_mock.so 00:05:48.281 SYMLINK libspdk_ut.so 00:05:48.281 SYMLINK libspdk_log.so 00:05:48.542 CC lib/ioat/ioat.o 00:05:48.542 CXX lib/trace_parser/trace.o 00:05:48.542 CC lib/dma/dma.o 00:05:48.542 CC lib/util/bit_array.o 00:05:48.542 CC lib/util/base64.o 00:05:48.542 CC lib/util/crc16.o 00:05:48.542 CC lib/util/cpuset.o 00:05:48.542 CC lib/util/crc32c.o 00:05:48.542 CC lib/util/crc32.o 00:05:48.542 CC lib/vfio_user/host/vfio_user_pci.o 00:05:48.542 CC lib/vfio_user/host/vfio_user.o 00:05:48.542 CC lib/util/crc32_ieee.o 00:05:48.542 CC lib/util/crc64.o 00:05:48.799 CC lib/util/dif.o 00:05:48.799 LIB libspdk_dma.a 00:05:48.799 CC lib/util/fd.o 00:05:48.799 CC lib/util/file.o 00:05:48.799 SO libspdk_dma.so.3.0 00:05:48.799 LIB libspdk_ioat.a 00:05:48.799 SO libspdk_ioat.so.6.0 00:05:48.799 CC lib/util/hexlify.o 00:05:48.799 SYMLINK libspdk_dma.so 00:05:48.799 CC lib/util/iov.o 00:05:48.799 CC 
lib/util/math.o 00:05:48.799 CC lib/util/pipe.o 00:05:48.799 SYMLINK libspdk_ioat.so 00:05:48.799 CC lib/util/strerror_tls.o 00:05:48.799 LIB libspdk_vfio_user.a 00:05:48.799 CC lib/util/string.o 00:05:48.799 SO libspdk_vfio_user.so.4.0 00:05:48.799 CC lib/util/uuid.o 00:05:49.057 SYMLINK libspdk_vfio_user.so 00:05:49.057 CC lib/util/fd_group.o 00:05:49.057 CC lib/util/xor.o 00:05:49.057 CC lib/util/zipf.o 00:05:49.315 LIB libspdk_util.a 00:05:49.573 SO libspdk_util.so.8.0 00:05:49.573 SYMLINK libspdk_util.so 00:05:49.573 LIB libspdk_trace_parser.a 00:05:49.573 SO libspdk_trace_parser.so.4.0 00:05:49.573 CC lib/conf/conf.o 00:05:49.831 CC lib/rdma/common.o 00:05:49.831 CC lib/rdma/rdma_verbs.o 00:05:49.831 CC lib/vmd/vmd.o 00:05:49.831 CC lib/vmd/led.o 00:05:49.831 CC lib/json/json_parse.o 00:05:49.831 CC lib/json/json_util.o 00:05:49.831 CC lib/idxd/idxd.o 00:05:49.831 CC lib/env_dpdk/env.o 00:05:49.831 SYMLINK libspdk_trace_parser.so 00:05:49.831 CC lib/idxd/idxd_user.o 00:05:49.831 CC lib/idxd/idxd_kernel.o 00:05:49.831 LIB libspdk_conf.a 00:05:49.831 CC lib/json/json_write.o 00:05:49.831 CC lib/env_dpdk/memory.o 00:05:49.831 SO libspdk_conf.so.5.0 00:05:50.089 CC lib/env_dpdk/pci.o 00:05:50.089 CC lib/env_dpdk/init.o 00:05:50.089 LIB libspdk_rdma.a 00:05:50.089 SYMLINK libspdk_conf.so 00:05:50.089 CC lib/env_dpdk/threads.o 00:05:50.089 CC lib/env_dpdk/pci_ioat.o 00:05:50.089 SO libspdk_rdma.so.5.0 00:05:50.089 SYMLINK libspdk_rdma.so 00:05:50.089 CC lib/env_dpdk/pci_virtio.o 00:05:50.089 CC lib/env_dpdk/pci_vmd.o 00:05:50.089 CC lib/env_dpdk/pci_idxd.o 00:05:50.347 CC lib/env_dpdk/pci_event.o 00:05:50.347 LIB libspdk_idxd.a 00:05:50.347 CC lib/env_dpdk/sigbus_handler.o 00:05:50.347 LIB libspdk_json.a 00:05:50.347 CC lib/env_dpdk/pci_dpdk.o 00:05:50.347 SO libspdk_idxd.so.11.0 00:05:50.347 SO libspdk_json.so.5.1 00:05:50.347 LIB libspdk_vmd.a 00:05:50.347 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:50.347 SYMLINK libspdk_idxd.so 00:05:50.347 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:50.347 SO libspdk_vmd.so.5.0 00:05:50.347 SYMLINK libspdk_json.so 00:05:50.347 SYMLINK libspdk_vmd.so 00:05:50.604 CC lib/jsonrpc/jsonrpc_server.o 00:05:50.604 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:50.605 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:50.605 CC lib/jsonrpc/jsonrpc_client.o 00:05:50.863 LIB libspdk_jsonrpc.a 00:05:50.863 SO libspdk_jsonrpc.so.5.1 00:05:51.122 SYMLINK libspdk_jsonrpc.so 00:05:51.122 LIB libspdk_env_dpdk.a 00:05:51.122 CC lib/rpc/rpc.o 00:05:51.380 SO libspdk_env_dpdk.so.13.0 00:05:51.380 LIB libspdk_rpc.a 00:05:51.380 SO libspdk_rpc.so.5.0 00:05:51.380 SYMLINK libspdk_env_dpdk.so 00:05:51.638 SYMLINK libspdk_rpc.so 00:05:51.638 CC lib/notify/notify_rpc.o 00:05:51.638 CC lib/notify/notify.o 00:05:51.638 CC lib/trace/trace.o 00:05:51.638 CC lib/trace/trace_rpc.o 00:05:51.638 CC lib/trace/trace_flags.o 00:05:51.638 CC lib/sock/sock.o 00:05:51.638 CC lib/sock/sock_rpc.o 00:05:51.897 LIB libspdk_notify.a 00:05:51.897 SO libspdk_notify.so.5.0 00:05:51.897 SYMLINK libspdk_notify.so 00:05:51.897 LIB libspdk_trace.a 00:05:52.155 SO libspdk_trace.so.9.0 00:05:52.155 SYMLINK libspdk_trace.so 00:05:52.155 LIB libspdk_sock.a 00:05:52.155 SO libspdk_sock.so.8.0 00:05:52.412 CC lib/thread/iobuf.o 00:05:52.412 CC lib/thread/thread.o 00:05:52.412 SYMLINK libspdk_sock.so 00:05:52.412 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:52.412 CC lib/nvme/nvme_ns_cmd.o 00:05:52.412 CC lib/nvme/nvme_fabric.o 00:05:52.412 CC lib/nvme/nvme_ctrlr.o 00:05:52.412 CC lib/nvme/nvme_pcie.o 00:05:52.412 CC lib/nvme/nvme_ns.o 
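(For each SPDK component the log interleaves a CC line per object, a LIB line for the static archive, an SO line for the versioned shared object, and a SYMLINK line for the unversioned name. As an illustration of that pattern only — not SPDK's actual make rules — the SO/SYMLINK pair for a library such as libspdk_ut_mock amounts to something like:

  # hypothetical equivalent of "SO libspdk_ut_mock.so.5.0" + "SYMLINK libspdk_ut_mock.so"
  cc -shared -Wl,-soname,libspdk_ut_mock.so.5.0 -o libspdk_ut_mock.so.5.0 lib/ut_mock/*.o
  ln -sf libspdk_ut_mock.so.5.0 libspdk_ut_mock.so
)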
00:05:52.412 CC lib/nvme/nvme_qpair.o 00:05:52.412 CC lib/nvme/nvme_pcie_common.o 00:05:52.670 CC lib/nvme/nvme.o 00:05:53.236 CC lib/nvme/nvme_quirks.o 00:05:53.236 CC lib/nvme/nvme_transport.o 00:05:53.236 CC lib/nvme/nvme_discovery.o 00:05:53.493 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:53.494 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:53.494 CC lib/nvme/nvme_tcp.o 00:05:53.752 CC lib/nvme/nvme_opal.o 00:05:53.752 CC lib/nvme/nvme_io_msg.o 00:05:54.060 CC lib/nvme/nvme_poll_group.o 00:05:54.060 LIB libspdk_thread.a 00:05:54.060 CC lib/nvme/nvme_zns.o 00:05:54.060 CC lib/nvme/nvme_cuse.o 00:05:54.060 SO libspdk_thread.so.9.0 00:05:54.060 SYMLINK libspdk_thread.so 00:05:54.060 CC lib/nvme/nvme_vfio_user.o 00:05:54.318 CC lib/accel/accel.o 00:05:54.318 CC lib/blob/blobstore.o 00:05:54.318 CC lib/init/json_config.o 00:05:54.575 CC lib/blob/request.o 00:05:54.575 CC lib/init/subsystem.o 00:05:54.832 CC lib/blob/zeroes.o 00:05:54.832 CC lib/blob/blob_bs_dev.o 00:05:54.832 CC lib/init/subsystem_rpc.o 00:05:54.832 CC lib/nvme/nvme_rdma.o 00:05:54.832 CC lib/accel/accel_rpc.o 00:05:55.088 CC lib/virtio/virtio.o 00:05:55.088 CC lib/init/rpc.o 00:05:55.088 CC lib/virtio/virtio_vhost_user.o 00:05:55.088 CC lib/vfu_tgt/tgt_endpoint.o 00:05:55.088 CC lib/accel/accel_sw.o 00:05:55.088 CC lib/vfu_tgt/tgt_rpc.o 00:05:55.088 CC lib/virtio/virtio_vfio_user.o 00:05:55.088 LIB libspdk_init.a 00:05:55.088 SO libspdk_init.so.4.0 00:05:55.346 SYMLINK libspdk_init.so 00:05:55.346 CC lib/virtio/virtio_pci.o 00:05:55.346 LIB libspdk_accel.a 00:05:55.346 LIB libspdk_vfu_tgt.a 00:05:55.346 CC lib/event/reactor.o 00:05:55.346 CC lib/event/app.o 00:05:55.346 SO libspdk_accel.so.14.0 00:05:55.346 CC lib/event/log_rpc.o 00:05:55.346 CC lib/event/app_rpc.o 00:05:55.346 CC lib/event/scheduler_static.o 00:05:55.346 SO libspdk_vfu_tgt.so.2.0 00:05:55.664 SYMLINK libspdk_accel.so 00:05:55.664 LIB libspdk_virtio.a 00:05:55.664 SYMLINK libspdk_vfu_tgt.so 00:05:55.664 SO libspdk_virtio.so.6.0 00:05:55.664 SYMLINK libspdk_virtio.so 00:05:55.664 CC lib/bdev/bdev_zone.o 00:05:55.664 CC lib/bdev/bdev.o 00:05:55.664 CC lib/bdev/bdev_rpc.o 00:05:55.664 CC lib/bdev/part.o 00:05:55.664 CC lib/bdev/scsi_nvme.o 00:05:55.948 LIB libspdk_event.a 00:05:55.948 SO libspdk_event.so.12.0 00:05:55.948 SYMLINK libspdk_event.so 00:05:56.206 LIB libspdk_nvme.a 00:05:56.464 SO libspdk_nvme.so.12.0 00:05:56.723 SYMLINK libspdk_nvme.so 00:05:57.655 LIB libspdk_blob.a 00:05:57.655 SO libspdk_blob.so.10.1 00:05:57.655 SYMLINK libspdk_blob.so 00:05:57.915 CC lib/lvol/lvol.o 00:05:57.915 CC lib/blobfs/blobfs.o 00:05:57.915 CC lib/blobfs/tree.o 00:05:58.481 LIB libspdk_bdev.a 00:05:58.737 SO libspdk_bdev.so.14.0 00:05:58.737 LIB libspdk_blobfs.a 00:05:58.737 SO libspdk_blobfs.so.9.0 00:05:58.737 SYMLINK libspdk_bdev.so 00:05:58.737 LIB libspdk_lvol.a 00:05:58.737 SO libspdk_lvol.so.9.1 00:05:58.737 SYMLINK libspdk_blobfs.so 00:05:58.737 CC lib/ftl/ftl_core.o 00:05:58.995 CC lib/ftl/ftl_init.o 00:05:58.995 CC lib/ftl/ftl_layout.o 00:05:58.995 CC lib/ftl/ftl_debug.o 00:05:58.995 CC lib/ftl/ftl_io.o 00:05:58.995 CC lib/scsi/dev.o 00:05:58.995 CC lib/nvmf/ctrlr.o 00:05:58.995 CC lib/nbd/nbd.o 00:05:58.995 CC lib/ublk/ublk.o 00:05:58.995 SYMLINK libspdk_lvol.so 00:05:58.995 CC lib/ublk/ublk_rpc.o 00:05:58.995 CC lib/scsi/lun.o 00:05:58.995 CC lib/ftl/ftl_sb.o 00:05:59.252 CC lib/ftl/ftl_l2p.o 00:05:59.252 CC lib/ftl/ftl_l2p_flat.o 00:05:59.252 CC lib/nvmf/ctrlr_discovery.o 00:05:59.252 CC lib/nvmf/ctrlr_bdev.o 00:05:59.252 CC lib/ftl/ftl_nv_cache.o 00:05:59.509 CC 
lib/nbd/nbd_rpc.o 00:05:59.509 CC lib/ftl/ftl_band.o 00:05:59.509 CC lib/nvmf/subsystem.o 00:05:59.509 CC lib/nvmf/nvmf.o 00:05:59.509 CC lib/scsi/port.o 00:05:59.509 LIB libspdk_ublk.a 00:05:59.509 SO libspdk_ublk.so.2.0 00:05:59.509 LIB libspdk_nbd.a 00:05:59.509 SO libspdk_nbd.so.6.0 00:05:59.766 SYMLINK libspdk_ublk.so 00:05:59.766 CC lib/scsi/scsi.o 00:05:59.766 CC lib/scsi/scsi_bdev.o 00:05:59.766 SYMLINK libspdk_nbd.so 00:05:59.766 CC lib/ftl/ftl_band_ops.o 00:05:59.766 CC lib/ftl/ftl_writer.o 00:05:59.766 CC lib/nvmf/nvmf_rpc.o 00:05:59.766 CC lib/nvmf/transport.o 00:06:00.024 CC lib/nvmf/tcp.o 00:06:00.024 CC lib/scsi/scsi_pr.o 00:06:00.024 CC lib/nvmf/vfio_user.o 00:06:00.283 CC lib/scsi/scsi_rpc.o 00:06:00.283 CC lib/ftl/ftl_rq.o 00:06:00.283 CC lib/scsi/task.o 00:06:00.541 CC lib/ftl/ftl_reloc.o 00:06:00.541 CC lib/nvmf/rdma.o 00:06:00.541 CC lib/ftl/ftl_l2p_cache.o 00:06:00.541 CC lib/ftl/ftl_p2l.o 00:06:00.541 LIB libspdk_scsi.a 00:06:00.798 SO libspdk_scsi.so.8.0 00:06:00.798 CC lib/ftl/mngt/ftl_mngt.o 00:06:00.798 SYMLINK libspdk_scsi.so 00:06:00.798 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:00.798 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:00.798 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:00.798 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:00.798 CC lib/iscsi/conn.o 00:06:01.055 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:01.055 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:01.055 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:01.055 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:01.055 CC lib/vhost/vhost.o 00:06:01.055 CC lib/vhost/vhost_rpc.o 00:06:01.313 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:01.313 CC lib/iscsi/init_grp.o 00:06:01.313 CC lib/iscsi/iscsi.o 00:06:01.313 CC lib/iscsi/md5.o 00:06:01.585 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:01.585 CC lib/vhost/vhost_scsi.o 00:06:01.585 CC lib/vhost/vhost_blk.o 00:06:01.585 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:01.585 CC lib/vhost/rte_vhost_user.o 00:06:01.585 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:01.585 CC lib/iscsi/param.o 00:06:01.842 CC lib/iscsi/portal_grp.o 00:06:01.842 CC lib/ftl/utils/ftl_conf.o 00:06:01.842 CC lib/iscsi/tgt_node.o 00:06:01.842 CC lib/ftl/utils/ftl_md.o 00:06:02.100 CC lib/iscsi/iscsi_subsystem.o 00:06:02.100 CC lib/iscsi/iscsi_rpc.o 00:06:02.100 CC lib/iscsi/task.o 00:06:02.358 CC lib/ftl/utils/ftl_mempool.o 00:06:02.358 CC lib/ftl/utils/ftl_bitmap.o 00:06:02.358 CC lib/ftl/utils/ftl_property.o 00:06:02.358 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:02.358 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:02.358 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:02.616 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:02.616 LIB libspdk_nvmf.a 00:06:02.616 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:02.616 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:02.616 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:02.616 SO libspdk_nvmf.so.17.0 00:06:02.616 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:02.616 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:02.616 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:02.616 CC lib/ftl/base/ftl_base_dev.o 00:06:02.616 LIB libspdk_iscsi.a 00:06:02.616 LIB libspdk_vhost.a 00:06:02.874 SO libspdk_iscsi.so.7.0 00:06:02.874 CC lib/ftl/base/ftl_base_bdev.o 00:06:02.874 CC lib/ftl/ftl_trace.o 00:06:02.874 SO libspdk_vhost.so.7.1 00:06:02.874 SYMLINK libspdk_nvmf.so 00:06:02.874 SYMLINK libspdk_vhost.so 00:06:02.874 SYMLINK libspdk_iscsi.so 00:06:03.133 LIB libspdk_ftl.a 00:06:03.392 SO libspdk_ftl.so.8.0 00:06:03.650 SYMLINK libspdk_ftl.so 00:06:03.908 CC module/vfu_device/vfu_virtio.o 00:06:03.908 CC module/env_dpdk/env_dpdk_rpc.o 00:06:03.908 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:06:03.908 CC module/blob/bdev/blob_bdev.o 00:06:03.908 CC module/accel/error/accel_error.o 00:06:03.908 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:03.908 CC module/accel/ioat/accel_ioat.o 00:06:03.908 CC module/scheduler/gscheduler/gscheduler.o 00:06:03.908 CC module/sock/posix/posix.o 00:06:03.908 CC module/accel/dsa/accel_dsa.o 00:06:03.908 LIB libspdk_env_dpdk_rpc.a 00:06:03.908 SO libspdk_env_dpdk_rpc.so.5.0 00:06:04.165 LIB libspdk_scheduler_dpdk_governor.a 00:06:04.165 LIB libspdk_scheduler_gscheduler.a 00:06:04.165 SYMLINK libspdk_env_dpdk_rpc.so 00:06:04.165 SO libspdk_scheduler_dpdk_governor.so.3.0 00:06:04.165 SO libspdk_scheduler_gscheduler.so.3.0 00:06:04.165 CC module/accel/error/accel_error_rpc.o 00:06:04.165 CC module/accel/ioat/accel_ioat_rpc.o 00:06:04.165 LIB libspdk_scheduler_dynamic.a 00:06:04.165 SO libspdk_scheduler_dynamic.so.3.0 00:06:04.165 SYMLINK libspdk_scheduler_gscheduler.so 00:06:04.165 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:04.165 CC module/accel/dsa/accel_dsa_rpc.o 00:06:04.165 CC module/vfu_device/vfu_virtio_blk.o 00:06:04.165 CC module/vfu_device/vfu_virtio_scsi.o 00:06:04.165 SYMLINK libspdk_scheduler_dynamic.so 00:06:04.165 CC module/vfu_device/vfu_virtio_rpc.o 00:06:04.165 CC module/accel/iaa/accel_iaa.o 00:06:04.165 LIB libspdk_blob_bdev.a 00:06:04.165 LIB libspdk_accel_error.a 00:06:04.165 SO libspdk_blob_bdev.so.10.1 00:06:04.165 LIB libspdk_accel_ioat.a 00:06:04.165 SO libspdk_accel_error.so.1.0 00:06:04.423 SO libspdk_accel_ioat.so.5.0 00:06:04.423 LIB libspdk_accel_dsa.a 00:06:04.423 SYMLINK libspdk_blob_bdev.so 00:06:04.423 SYMLINK libspdk_accel_error.so 00:06:04.423 SYMLINK libspdk_accel_ioat.so 00:06:04.423 SO libspdk_accel_dsa.so.4.0 00:06:04.423 CC module/accel/iaa/accel_iaa_rpc.o 00:06:04.423 SYMLINK libspdk_accel_dsa.so 00:06:04.423 CC module/blobfs/bdev/blobfs_bdev.o 00:06:04.423 CC module/bdev/delay/vbdev_delay.o 00:06:04.423 LIB libspdk_accel_iaa.a 00:06:04.423 CC module/bdev/gpt/gpt.o 00:06:04.423 CC module/bdev/lvol/vbdev_lvol.o 00:06:04.423 CC module/bdev/error/vbdev_error.o 00:06:04.681 SO libspdk_accel_iaa.so.2.0 00:06:04.681 LIB libspdk_vfu_device.a 00:06:04.681 CC module/bdev/malloc/bdev_malloc.o 00:06:04.681 CC module/bdev/null/bdev_null.o 00:06:04.681 SO libspdk_vfu_device.so.2.0 00:06:04.681 SYMLINK libspdk_accel_iaa.so 00:06:04.681 CC module/bdev/error/vbdev_error_rpc.o 00:06:04.681 SYMLINK libspdk_vfu_device.so 00:06:04.681 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:04.681 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:04.681 CC module/bdev/gpt/vbdev_gpt.o 00:06:04.681 LIB libspdk_sock_posix.a 00:06:04.681 SO libspdk_sock_posix.so.5.0 00:06:04.939 CC module/bdev/null/bdev_null_rpc.o 00:06:04.939 SYMLINK libspdk_sock_posix.so 00:06:04.939 LIB libspdk_bdev_error.a 00:06:04.939 LIB libspdk_blobfs_bdev.a 00:06:04.939 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:04.939 SO libspdk_bdev_error.so.5.0 00:06:04.939 SO libspdk_blobfs_bdev.so.5.0 00:06:04.939 CC module/bdev/nvme/bdev_nvme.o 00:06:04.939 LIB libspdk_bdev_null.a 00:06:04.939 SYMLINK libspdk_blobfs_bdev.so 00:06:04.939 SYMLINK libspdk_bdev_error.so 00:06:04.939 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:04.939 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:04.939 CC module/bdev/passthru/vbdev_passthru.o 00:06:04.939 LIB libspdk_bdev_gpt.a 00:06:04.939 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:04.939 CC module/bdev/nvme/nvme_rpc.o 00:06:04.939 SO libspdk_bdev_null.so.5.0 00:06:04.939 LIB 
libspdk_bdev_lvol.a 00:06:04.939 SO libspdk_bdev_gpt.so.5.0 00:06:05.197 SO libspdk_bdev_lvol.so.5.0 00:06:05.197 LIB libspdk_bdev_delay.a 00:06:05.197 SYMLINK libspdk_bdev_gpt.so 00:06:05.197 SO libspdk_bdev_delay.so.5.0 00:06:05.197 LIB libspdk_bdev_malloc.a 00:06:05.197 SYMLINK libspdk_bdev_null.so 00:06:05.197 SYMLINK libspdk_bdev_lvol.so 00:06:05.197 SO libspdk_bdev_malloc.so.5.0 00:06:05.197 SYMLINK libspdk_bdev_delay.so 00:06:05.197 CC module/bdev/nvme/bdev_mdns_client.o 00:06:05.197 CC module/bdev/nvme/vbdev_opal.o 00:06:05.197 CC module/bdev/raid/bdev_raid.o 00:06:05.197 CC module/bdev/raid/bdev_raid_rpc.o 00:06:05.197 SYMLINK libspdk_bdev_malloc.so 00:06:05.453 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:05.453 CC module/bdev/split/vbdev_split.o 00:06:05.453 LIB libspdk_bdev_passthru.a 00:06:05.453 SO libspdk_bdev_passthru.so.5.0 00:06:05.453 CC module/bdev/aio/bdev_aio.o 00:06:05.453 SYMLINK libspdk_bdev_passthru.so 00:06:05.453 CC module/bdev/raid/bdev_raid_sb.o 00:06:05.711 CC module/bdev/ftl/bdev_ftl.o 00:06:05.711 CC module/bdev/raid/raid0.o 00:06:05.711 CC module/bdev/split/vbdev_split_rpc.o 00:06:05.711 CC module/bdev/iscsi/bdev_iscsi.o 00:06:05.711 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:05.711 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:05.711 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:05.711 CC module/bdev/aio/bdev_aio_rpc.o 00:06:05.711 LIB libspdk_bdev_split.a 00:06:05.971 SO libspdk_bdev_split.so.5.0 00:06:05.971 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:05.971 LIB libspdk_bdev_zone_block.a 00:06:05.971 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:05.971 SO libspdk_bdev_zone_block.so.5.0 00:06:05.971 SYMLINK libspdk_bdev_split.so 00:06:05.971 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:05.971 SYMLINK libspdk_bdev_zone_block.so 00:06:05.971 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:05.971 LIB libspdk_bdev_aio.a 00:06:05.971 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:05.971 CC module/bdev/raid/raid1.o 00:06:05.971 SO libspdk_bdev_aio.so.5.0 00:06:06.228 SYMLINK libspdk_bdev_aio.so 00:06:06.228 CC module/bdev/raid/concat.o 00:06:06.228 LIB libspdk_bdev_ftl.a 00:06:06.228 LIB libspdk_bdev_iscsi.a 00:06:06.228 SO libspdk_bdev_ftl.so.5.0 00:06:06.228 SO libspdk_bdev_iscsi.so.5.0 00:06:06.228 SYMLINK libspdk_bdev_ftl.so 00:06:06.228 LIB libspdk_bdev_virtio.a 00:06:06.228 SYMLINK libspdk_bdev_iscsi.so 00:06:06.228 SO libspdk_bdev_virtio.so.5.0 00:06:06.487 LIB libspdk_bdev_raid.a 00:06:06.487 SYMLINK libspdk_bdev_virtio.so 00:06:06.487 SO libspdk_bdev_raid.so.5.0 00:06:06.487 SYMLINK libspdk_bdev_raid.so 00:06:07.420 LIB libspdk_bdev_nvme.a 00:06:07.420 SO libspdk_bdev_nvme.so.6.0 00:06:07.420 SYMLINK libspdk_bdev_nvme.so 00:06:08.026 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:08.026 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:08.026 CC module/event/subsystems/scheduler/scheduler.o 00:06:08.026 CC module/event/subsystems/sock/sock.o 00:06:08.026 CC module/event/subsystems/iobuf/iobuf.o 00:06:08.026 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:08.026 CC module/event/subsystems/vmd/vmd.o 00:06:08.026 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:08.026 LIB libspdk_event_sock.a 00:06:08.026 LIB libspdk_event_vfu_tgt.a 00:06:08.026 LIB libspdk_event_vhost_blk.a 00:06:08.026 LIB libspdk_event_scheduler.a 00:06:08.026 LIB libspdk_event_iobuf.a 00:06:08.026 SO libspdk_event_vfu_tgt.so.2.0 00:06:08.026 SO libspdk_event_sock.so.4.0 00:06:08.026 LIB libspdk_event_vmd.a 00:06:08.026 SO libspdk_event_vhost_blk.so.2.0 00:06:08.026 
SO libspdk_event_scheduler.so.3.0 00:06:08.026 SO libspdk_event_iobuf.so.2.0 00:06:08.026 SO libspdk_event_vmd.so.5.0 00:06:08.026 SYMLINK libspdk_event_sock.so 00:06:08.026 SYMLINK libspdk_event_vfu_tgt.so 00:06:08.026 SYMLINK libspdk_event_vhost_blk.so 00:06:08.026 SYMLINK libspdk_event_scheduler.so 00:06:08.026 SYMLINK libspdk_event_iobuf.so 00:06:08.288 SYMLINK libspdk_event_vmd.so 00:06:08.288 CC module/event/subsystems/accel/accel.o 00:06:08.548 LIB libspdk_event_accel.a 00:06:08.548 SO libspdk_event_accel.so.5.0 00:06:08.548 SYMLINK libspdk_event_accel.so 00:06:08.805 CC module/event/subsystems/bdev/bdev.o 00:06:09.063 LIB libspdk_event_bdev.a 00:06:09.063 SO libspdk_event_bdev.so.5.0 00:06:09.321 SYMLINK libspdk_event_bdev.so 00:06:09.321 CC module/event/subsystems/scsi/scsi.o 00:06:09.321 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:09.321 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:09.321 CC module/event/subsystems/ublk/ublk.o 00:06:09.321 CC module/event/subsystems/nbd/nbd.o 00:06:09.579 LIB libspdk_event_ublk.a 00:06:09.579 SO libspdk_event_ublk.so.2.0 00:06:09.579 LIB libspdk_event_scsi.a 00:06:09.579 LIB libspdk_event_nbd.a 00:06:09.579 SO libspdk_event_scsi.so.5.0 00:06:09.579 SO libspdk_event_nbd.so.5.0 00:06:09.579 SYMLINK libspdk_event_ublk.so 00:06:09.579 LIB libspdk_event_nvmf.a 00:06:09.579 SYMLINK libspdk_event_scsi.so 00:06:09.579 SYMLINK libspdk_event_nbd.so 00:06:09.579 SO libspdk_event_nvmf.so.5.0 00:06:09.837 SYMLINK libspdk_event_nvmf.so 00:06:09.837 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:09.837 CC module/event/subsystems/iscsi/iscsi.o 00:06:10.095 LIB libspdk_event_vhost_scsi.a 00:06:10.095 LIB libspdk_event_iscsi.a 00:06:10.095 SO libspdk_event_vhost_scsi.so.2.0 00:06:10.095 SO libspdk_event_iscsi.so.5.0 00:06:10.095 SYMLINK libspdk_event_vhost_scsi.so 00:06:10.095 SYMLINK libspdk_event_iscsi.so 00:06:10.095 SO libspdk.so.5.0 00:06:10.095 SYMLINK libspdk.so 00:06:10.352 CXX app/trace/trace.o 00:06:10.353 CC examples/nvme/hello_world/hello_world.o 00:06:10.353 CC examples/sock/hello_world/hello_sock.o 00:06:10.353 CC examples/vmd/lsvmd/lsvmd.o 00:06:10.353 CC examples/ioat/perf/perf.o 00:06:10.353 CC examples/accel/perf/accel_perf.o 00:06:10.610 CC examples/blob/hello_world/hello_blob.o 00:06:10.610 CC examples/bdev/hello_world/hello_bdev.o 00:06:10.610 CC test/accel/dif/dif.o 00:06:10.610 CC examples/nvmf/nvmf/nvmf.o 00:06:10.610 LINK lsvmd 00:06:10.610 LINK ioat_perf 00:06:10.610 LINK hello_sock 00:06:10.610 LINK hello_world 00:06:10.610 LINK hello_blob 00:06:10.868 LINK hello_bdev 00:06:10.868 CC examples/vmd/led/led.o 00:06:10.868 LINK nvmf 00:06:10.868 LINK spdk_trace 00:06:10.868 CC examples/ioat/verify/verify.o 00:06:10.868 CC examples/bdev/bdevperf/bdevperf.o 00:06:10.868 LINK accel_perf 00:06:10.868 CC examples/nvme/reconnect/reconnect.o 00:06:11.126 LINK led 00:06:11.126 LINK dif 00:06:11.126 CC examples/blob/cli/blobcli.o 00:06:11.126 CC examples/util/zipf/zipf.o 00:06:11.126 CC app/trace_record/trace_record.o 00:06:11.126 LINK verify 00:06:11.385 CC examples/thread/thread/thread_ex.o 00:06:11.385 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:11.385 LINK reconnect 00:06:11.385 LINK zipf 00:06:11.385 CC test/app/bdev_svc/bdev_svc.o 00:06:11.385 CC test/bdev/bdevio/bdevio.o 00:06:11.385 LINK spdk_trace_record 00:06:11.643 CC app/nvmf_tgt/nvmf_main.o 00:06:11.643 LINK thread 00:06:11.643 LINK bdev_svc 00:06:11.643 LINK blobcli 00:06:11.643 CC app/iscsi_tgt/iscsi_tgt.o 00:06:11.643 CC app/spdk_lspci/spdk_lspci.o 00:06:11.643 
CC app/spdk_tgt/spdk_tgt.o 00:06:11.643 LINK bdevperf 00:06:11.643 LINK nvmf_tgt 00:06:11.902 LINK spdk_lspci 00:06:11.902 LINK nvme_manage 00:06:11.902 LINK bdevio 00:06:11.902 LINK iscsi_tgt 00:06:11.902 CC app/spdk_nvme_perf/perf.o 00:06:11.902 LINK spdk_tgt 00:06:11.902 CC app/spdk_nvme_identify/identify.o 00:06:11.902 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:12.160 CC examples/nvme/arbitration/arbitration.o 00:06:12.160 CC examples/nvme/hotplug/hotplug.o 00:06:12.160 CC test/app/histogram_perf/histogram_perf.o 00:06:12.160 TEST_HEADER include/spdk/accel.h 00:06:12.160 TEST_HEADER include/spdk/accel_module.h 00:06:12.160 TEST_HEADER include/spdk/assert.h 00:06:12.160 CC test/app/jsoncat/jsoncat.o 00:06:12.160 TEST_HEADER include/spdk/barrier.h 00:06:12.160 TEST_HEADER include/spdk/base64.h 00:06:12.160 TEST_HEADER include/spdk/bdev.h 00:06:12.160 TEST_HEADER include/spdk/bdev_module.h 00:06:12.160 TEST_HEADER include/spdk/bdev_zone.h 00:06:12.160 TEST_HEADER include/spdk/bit_array.h 00:06:12.160 TEST_HEADER include/spdk/bit_pool.h 00:06:12.160 TEST_HEADER include/spdk/blob_bdev.h 00:06:12.160 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:12.160 TEST_HEADER include/spdk/blobfs.h 00:06:12.160 TEST_HEADER include/spdk/blob.h 00:06:12.160 TEST_HEADER include/spdk/conf.h 00:06:12.160 TEST_HEADER include/spdk/config.h 00:06:12.160 TEST_HEADER include/spdk/cpuset.h 00:06:12.160 TEST_HEADER include/spdk/crc16.h 00:06:12.160 TEST_HEADER include/spdk/crc32.h 00:06:12.160 TEST_HEADER include/spdk/crc64.h 00:06:12.160 CC test/blobfs/mkfs/mkfs.o 00:06:12.160 TEST_HEADER include/spdk/dif.h 00:06:12.160 TEST_HEADER include/spdk/dma.h 00:06:12.160 TEST_HEADER include/spdk/endian.h 00:06:12.160 TEST_HEADER include/spdk/env_dpdk.h 00:06:12.160 TEST_HEADER include/spdk/env.h 00:06:12.160 TEST_HEADER include/spdk/event.h 00:06:12.160 TEST_HEADER include/spdk/fd_group.h 00:06:12.160 TEST_HEADER include/spdk/fd.h 00:06:12.160 CC test/app/stub/stub.o 00:06:12.160 TEST_HEADER include/spdk/file.h 00:06:12.160 TEST_HEADER include/spdk/ftl.h 00:06:12.160 TEST_HEADER include/spdk/gpt_spec.h 00:06:12.160 TEST_HEADER include/spdk/hexlify.h 00:06:12.160 TEST_HEADER include/spdk/histogram_data.h 00:06:12.160 TEST_HEADER include/spdk/idxd.h 00:06:12.160 TEST_HEADER include/spdk/idxd_spec.h 00:06:12.160 TEST_HEADER include/spdk/init.h 00:06:12.160 TEST_HEADER include/spdk/ioat.h 00:06:12.160 TEST_HEADER include/spdk/ioat_spec.h 00:06:12.160 TEST_HEADER include/spdk/iscsi_spec.h 00:06:12.160 TEST_HEADER include/spdk/json.h 00:06:12.160 TEST_HEADER include/spdk/jsonrpc.h 00:06:12.160 TEST_HEADER include/spdk/likely.h 00:06:12.160 TEST_HEADER include/spdk/log.h 00:06:12.160 TEST_HEADER include/spdk/lvol.h 00:06:12.160 TEST_HEADER include/spdk/memory.h 00:06:12.160 LINK histogram_perf 00:06:12.160 TEST_HEADER include/spdk/mmio.h 00:06:12.419 TEST_HEADER include/spdk/nbd.h 00:06:12.419 TEST_HEADER include/spdk/notify.h 00:06:12.419 TEST_HEADER include/spdk/nvme.h 00:06:12.419 TEST_HEADER include/spdk/nvme_intel.h 00:06:12.419 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:12.419 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:12.419 TEST_HEADER include/spdk/nvme_spec.h 00:06:12.419 TEST_HEADER include/spdk/nvme_zns.h 00:06:12.419 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:12.419 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:12.419 TEST_HEADER include/spdk/nvmf.h 00:06:12.419 TEST_HEADER include/spdk/nvmf_spec.h 00:06:12.419 TEST_HEADER include/spdk/nvmf_transport.h 00:06:12.419 TEST_HEADER include/spdk/opal.h 00:06:12.419 
TEST_HEADER include/spdk/opal_spec.h 00:06:12.419 TEST_HEADER include/spdk/pci_ids.h 00:06:12.419 TEST_HEADER include/spdk/pipe.h 00:06:12.419 TEST_HEADER include/spdk/queue.h 00:06:12.419 TEST_HEADER include/spdk/reduce.h 00:06:12.419 TEST_HEADER include/spdk/rpc.h 00:06:12.419 TEST_HEADER include/spdk/scheduler.h 00:06:12.419 TEST_HEADER include/spdk/scsi.h 00:06:12.419 LINK jsoncat 00:06:12.419 TEST_HEADER include/spdk/scsi_spec.h 00:06:12.419 TEST_HEADER include/spdk/sock.h 00:06:12.419 TEST_HEADER include/spdk/stdinc.h 00:06:12.419 TEST_HEADER include/spdk/string.h 00:06:12.419 TEST_HEADER include/spdk/thread.h 00:06:12.419 TEST_HEADER include/spdk/trace.h 00:06:12.419 TEST_HEADER include/spdk/trace_parser.h 00:06:12.419 TEST_HEADER include/spdk/tree.h 00:06:12.419 TEST_HEADER include/spdk/ublk.h 00:06:12.419 TEST_HEADER include/spdk/util.h 00:06:12.419 TEST_HEADER include/spdk/uuid.h 00:06:12.419 TEST_HEADER include/spdk/version.h 00:06:12.419 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:12.419 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:12.419 TEST_HEADER include/spdk/vhost.h 00:06:12.419 LINK hotplug 00:06:12.419 TEST_HEADER include/spdk/vmd.h 00:06:12.419 TEST_HEADER include/spdk/xor.h 00:06:12.419 TEST_HEADER include/spdk/zipf.h 00:06:12.419 CXX test/cpp_headers/accel.o 00:06:12.419 LINK nvme_fuzz 00:06:12.419 LINK stub 00:06:12.419 LINK mkfs 00:06:12.677 LINK arbitration 00:06:12.677 CXX test/cpp_headers/accel_module.o 00:06:12.677 CC test/dma/test_dma/test_dma.o 00:06:12.677 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:12.677 CC test/env/vtophys/vtophys.o 00:06:12.677 CC test/env/mem_callbacks/mem_callbacks.o 00:06:12.677 CC test/event/event_perf/event_perf.o 00:06:12.677 CXX test/cpp_headers/assert.o 00:06:12.677 LINK spdk_nvme_identify 00:06:12.936 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:12.936 LINK spdk_nvme_perf 00:06:12.936 LINK vtophys 00:06:12.936 LINK event_perf 00:06:12.936 CC test/lvol/esnap/esnap.o 00:06:12.936 CXX test/cpp_headers/barrier.o 00:06:12.936 LINK cmb_copy 00:06:13.194 CC examples/idxd/perf/perf.o 00:06:13.194 LINK test_dma 00:06:13.194 CC test/event/reactor/reactor.o 00:06:13.194 CC app/spdk_nvme_discover/discovery_aer.o 00:06:13.194 CC test/nvme/aer/aer.o 00:06:13.194 CXX test/cpp_headers/base64.o 00:06:13.194 CC examples/nvme/abort/abort.o 00:06:13.452 LINK reactor 00:06:13.452 LINK mem_callbacks 00:06:13.452 CC test/nvme/reset/reset.o 00:06:13.452 LINK spdk_nvme_discover 00:06:13.452 CXX test/cpp_headers/bdev.o 00:06:13.452 LINK idxd_perf 00:06:13.452 LINK aer 00:06:13.452 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:13.452 CC test/event/reactor_perf/reactor_perf.o 00:06:13.711 CC app/spdk_top/spdk_top.o 00:06:13.711 CXX test/cpp_headers/bdev_module.o 00:06:13.711 CXX test/cpp_headers/bdev_zone.o 00:06:13.711 LINK reactor_perf 00:06:13.711 LINK abort 00:06:13.711 LINK env_dpdk_post_init 00:06:13.711 LINK reset 00:06:13.711 CC app/vhost/vhost.o 00:06:13.970 CXX test/cpp_headers/bit_array.o 00:06:13.970 CC test/event/app_repeat/app_repeat.o 00:06:13.970 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:13.970 CC test/env/memory/memory_ut.o 00:06:13.970 CC test/nvme/sgl/sgl.o 00:06:13.970 LINK vhost 00:06:13.970 CXX test/cpp_headers/bit_pool.o 00:06:14.227 LINK app_repeat 00:06:14.227 LINK pmr_persistence 00:06:14.227 LINK sgl 00:06:14.227 CXX test/cpp_headers/blob_bdev.o 00:06:14.485 LINK iscsi_fuzz 00:06:14.485 CC test/env/pci/pci_ut.o 00:06:14.485 CC test/event/scheduler/scheduler.o 00:06:14.485 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:06:14.485 CC test/nvme/e2edp/nvme_dp.o 00:06:14.485 CXX test/cpp_headers/blobfs_bdev.o 00:06:14.485 LINK spdk_top 00:06:14.485 LINK interrupt_tgt 00:06:14.743 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:14.743 LINK scheduler 00:06:14.743 CXX test/cpp_headers/blobfs.o 00:06:14.743 CC app/spdk_dd/spdk_dd.o 00:06:14.743 LINK nvme_dp 00:06:14.743 LINK pci_ut 00:06:14.743 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:14.743 CC test/nvme/overhead/overhead.o 00:06:14.743 CXX test/cpp_headers/blob.o 00:06:15.001 LINK memory_ut 00:06:15.001 CC test/rpc_client/rpc_client_test.o 00:06:15.001 CXX test/cpp_headers/conf.o 00:06:15.258 CXX test/cpp_headers/config.o 00:06:15.258 CXX test/cpp_headers/cpuset.o 00:06:15.258 LINK overhead 00:06:15.258 CC test/thread/poller_perf/poller_perf.o 00:06:15.258 LINK spdk_dd 00:06:15.516 CC test/nvme/err_injection/err_injection.o 00:06:15.516 LINK rpc_client_test 00:06:15.516 CC app/fio/nvme/fio_plugin.o 00:06:15.516 LINK vhost_fuzz 00:06:15.516 CXX test/cpp_headers/crc16.o 00:06:15.516 LINK poller_perf 00:06:15.516 CXX test/cpp_headers/crc32.o 00:06:15.516 CXX test/cpp_headers/crc64.o 00:06:15.516 CXX test/cpp_headers/dif.o 00:06:15.516 LINK err_injection 00:06:15.781 CXX test/cpp_headers/dma.o 00:06:15.781 CC test/nvme/reserve/reserve.o 00:06:15.781 CC test/nvme/startup/startup.o 00:06:15.781 CC app/fio/bdev/fio_plugin.o 00:06:15.781 CC test/nvme/connect_stress/connect_stress.o 00:06:15.781 CC test/nvme/simple_copy/simple_copy.o 00:06:15.781 CC test/nvme/boot_partition/boot_partition.o 00:06:15.781 CXX test/cpp_headers/endian.o 00:06:16.039 LINK reserve 00:06:16.039 LINK startup 00:06:16.039 LINK spdk_nvme 00:06:16.039 LINK boot_partition 00:06:16.039 LINK connect_stress 00:06:16.039 LINK simple_copy 00:06:16.039 CXX test/cpp_headers/env_dpdk.o 00:06:16.039 CC test/nvme/compliance/nvme_compliance.o 00:06:16.297 CC test/nvme/fused_ordering/fused_ordering.o 00:06:16.297 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:16.297 CXX test/cpp_headers/env.o 00:06:16.297 CXX test/cpp_headers/event.o 00:06:16.297 CC test/nvme/fdp/fdp.o 00:06:16.297 CC test/nvme/cuse/cuse.o 00:06:16.297 LINK spdk_bdev 00:06:16.554 LINK fused_ordering 00:06:16.554 LINK doorbell_aers 00:06:16.554 CXX test/cpp_headers/fd_group.o 00:06:16.554 CXX test/cpp_headers/fd.o 00:06:16.554 CXX test/cpp_headers/file.o 00:06:16.554 LINK nvme_compliance 00:06:16.554 CXX test/cpp_headers/ftl.o 00:06:16.554 CXX test/cpp_headers/gpt_spec.o 00:06:16.811 CXX test/cpp_headers/hexlify.o 00:06:16.811 CXX test/cpp_headers/histogram_data.o 00:06:16.811 LINK fdp 00:06:16.811 CXX test/cpp_headers/idxd.o 00:06:16.811 CXX test/cpp_headers/idxd_spec.o 00:06:16.811 CXX test/cpp_headers/init.o 00:06:16.811 CXX test/cpp_headers/ioat.o 00:06:16.811 CXX test/cpp_headers/ioat_spec.o 00:06:17.068 CXX test/cpp_headers/iscsi_spec.o 00:06:17.068 CXX test/cpp_headers/json.o 00:06:17.068 CXX test/cpp_headers/jsonrpc.o 00:06:17.068 CXX test/cpp_headers/likely.o 00:06:17.068 CXX test/cpp_headers/log.o 00:06:17.068 CXX test/cpp_headers/lvol.o 00:06:17.325 CXX test/cpp_headers/memory.o 00:06:17.325 CXX test/cpp_headers/mmio.o 00:06:17.325 CXX test/cpp_headers/nbd.o 00:06:17.325 CXX test/cpp_headers/notify.o 00:06:17.325 CXX test/cpp_headers/nvme.o 00:06:17.325 CXX test/cpp_headers/nvme_intel.o 00:06:17.325 CXX test/cpp_headers/nvme_ocssd.o 00:06:17.325 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:17.325 CXX test/cpp_headers/nvme_spec.o 00:06:17.325 CXX test/cpp_headers/nvme_zns.o 
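(The run of CXX test/cpp_headers/*.o steps that starts above and continues below builds one object per public SPDK header, which appears to be a standalone-compilation check: each header must compile on its own in a C++ translation unit. A minimal, hypothetical way to script the same check — not the harness SPDK actually generates — would be:

  # compile every public header in isolation; any non-self-contained header fails loudly
  for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      printf '#include <spdk/%s.h>\nint main(void) { return 0; }\n' "$name" > "/tmp/hdr_check_$name.cpp"
      c++ -I include -c "/tmp/hdr_check_$name.cpp" -o "/tmp/hdr_check_$name.o"
  done
)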
00:06:17.583 CXX test/cpp_headers/nvmf_cmd.o 00:06:17.583 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:17.583 CXX test/cpp_headers/nvmf.o 00:06:17.583 CXX test/cpp_headers/nvmf_spec.o 00:06:17.583 CXX test/cpp_headers/nvmf_transport.o 00:06:17.583 CXX test/cpp_headers/opal.o 00:06:17.583 CXX test/cpp_headers/opal_spec.o 00:06:17.583 LINK cuse 00:06:17.583 CXX test/cpp_headers/pci_ids.o 00:06:17.583 CXX test/cpp_headers/pipe.o 00:06:17.841 CXX test/cpp_headers/queue.o 00:06:17.841 CXX test/cpp_headers/reduce.o 00:06:17.841 CXX test/cpp_headers/rpc.o 00:06:17.841 CXX test/cpp_headers/scheduler.o 00:06:17.841 CXX test/cpp_headers/scsi.o 00:06:17.841 CXX test/cpp_headers/scsi_spec.o 00:06:17.841 CXX test/cpp_headers/sock.o 00:06:17.841 CXX test/cpp_headers/stdinc.o 00:06:17.841 CXX test/cpp_headers/string.o 00:06:18.099 CXX test/cpp_headers/thread.o 00:06:18.099 CXX test/cpp_headers/trace.o 00:06:18.099 CXX test/cpp_headers/trace_parser.o 00:06:18.357 CXX test/cpp_headers/tree.o 00:06:18.357 CXX test/cpp_headers/ublk.o 00:06:18.357 CXX test/cpp_headers/util.o 00:06:18.357 CXX test/cpp_headers/uuid.o 00:06:18.357 CXX test/cpp_headers/version.o 00:06:18.357 CXX test/cpp_headers/vfio_user_pci.o 00:06:18.357 CXX test/cpp_headers/vfio_user_spec.o 00:06:18.357 LINK esnap 00:06:18.614 CXX test/cpp_headers/vhost.o 00:06:18.614 CXX test/cpp_headers/vmd.o 00:06:18.614 CXX test/cpp_headers/xor.o 00:06:18.614 CXX test/cpp_headers/zipf.o 00:06:21.892 00:06:21.892 real 1m7.628s 00:06:21.892 user 7m4.666s 00:06:21.892 sys 1m39.110s 00:06:21.892 14:20:28 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:06:21.892 14:20:28 -- common/autotest_common.sh@10 -- $ set +x 00:06:21.892 ************************************ 00:06:21.893 END TEST make 00:06:21.893 ************************************ 00:06:21.893 14:20:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:21.893 14:20:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:21.893 14:20:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:21.893 14:20:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:21.893 14:20:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:21.893 14:20:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:21.893 14:20:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:21.893 14:20:28 -- scripts/common.sh@335 -- # IFS=.-: 00:06:21.893 14:20:28 -- scripts/common.sh@335 -- # read -ra ver1 00:06:21.893 14:20:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.893 14:20:28 -- scripts/common.sh@336 -- # read -ra ver2 00:06:21.893 14:20:28 -- scripts/common.sh@337 -- # local 'op=<' 00:06:21.893 14:20:28 -- scripts/common.sh@339 -- # ver1_l=2 00:06:21.893 14:20:28 -- scripts/common.sh@340 -- # ver2_l=1 00:06:21.893 14:20:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:21.893 14:20:28 -- scripts/common.sh@343 -- # case "$op" in 00:06:21.893 14:20:28 -- scripts/common.sh@344 -- # : 1 00:06:21.893 14:20:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:21.893 14:20:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.893 14:20:28 -- scripts/common.sh@364 -- # decimal 1 00:06:21.893 14:20:28 -- scripts/common.sh@352 -- # local d=1 00:06:21.893 14:20:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.893 14:20:28 -- scripts/common.sh@354 -- # echo 1 00:06:21.893 14:20:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:21.893 14:20:28 -- scripts/common.sh@365 -- # decimal 2 00:06:21.893 14:20:28 -- scripts/common.sh@352 -- # local d=2 00:06:21.893 14:20:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.893 14:20:28 -- scripts/common.sh@354 -- # echo 2 00:06:21.893 14:20:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:21.893 14:20:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:21.893 14:20:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:21.893 14:20:28 -- scripts/common.sh@367 -- # return 0 00:06:21.893 14:20:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.893 14:20:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:21.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.893 --rc genhtml_branch_coverage=1 00:06:21.893 --rc genhtml_function_coverage=1 00:06:21.893 --rc genhtml_legend=1 00:06:21.893 --rc geninfo_all_blocks=1 00:06:21.893 --rc geninfo_unexecuted_blocks=1 00:06:21.893 00:06:21.893 ' 00:06:21.893 14:20:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:21.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.893 --rc genhtml_branch_coverage=1 00:06:21.893 --rc genhtml_function_coverage=1 00:06:21.893 --rc genhtml_legend=1 00:06:21.893 --rc geninfo_all_blocks=1 00:06:21.893 --rc geninfo_unexecuted_blocks=1 00:06:21.893 00:06:21.893 ' 00:06:21.893 14:20:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:21.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.893 --rc genhtml_branch_coverage=1 00:06:21.893 --rc genhtml_function_coverage=1 00:06:21.893 --rc genhtml_legend=1 00:06:21.893 --rc geninfo_all_blocks=1 00:06:21.893 --rc geninfo_unexecuted_blocks=1 00:06:21.893 00:06:21.893 ' 00:06:21.893 14:20:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:21.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.893 --rc genhtml_branch_coverage=1 00:06:21.893 --rc genhtml_function_coverage=1 00:06:21.893 --rc genhtml_legend=1 00:06:21.893 --rc geninfo_all_blocks=1 00:06:21.893 --rc geninfo_unexecuted_blocks=1 00:06:21.893 00:06:21.893 ' 00:06:21.893 14:20:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.893 14:20:28 -- nvmf/common.sh@7 -- # uname -s 00:06:21.893 14:20:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.893 14:20:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.893 14:20:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.893 14:20:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.893 14:20:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.893 14:20:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.893 14:20:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.893 14:20:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.893 14:20:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.893 14:20:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.893 14:20:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:06:21.893 
14:20:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:06:21.893 14:20:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.893 14:20:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.893 14:20:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:21.893 14:20:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.893 14:20:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.893 14:20:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.893 14:20:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.893 14:20:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.893 14:20:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.893 14:20:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.893 14:20:28 -- paths/export.sh@5 -- # export PATH 00:06:21.893 14:20:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.893 14:20:28 -- nvmf/common.sh@46 -- # : 0 00:06:21.893 14:20:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:21.893 14:20:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:21.893 14:20:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:21.893 14:20:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.893 14:20:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.893 14:20:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:21.893 14:20:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:21.893 14:20:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:21.893 14:20:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:21.893 14:20:28 -- spdk/autotest.sh@32 -- # uname -s 00:06:21.893 14:20:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:21.893 14:20:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:21.893 14:20:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:21.893 14:20:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:21.893 14:20:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:21.893 14:20:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:21.893 14:20:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:21.893 14:20:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:21.893 14:20:28 -- spdk/autotest.sh@48 -- # 
udevadm_pid=49762 00:06:21.893 14:20:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:21.893 14:20:28 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:06:21.893 14:20:28 -- spdk/autotest.sh@54 -- # echo 49771 00:06:21.893 14:20:28 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:06:21.893 14:20:28 -- spdk/autotest.sh@56 -- # echo 49774 00:06:21.893 14:20:28 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:06:21.893 14:20:28 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:06:21.893 14:20:28 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:21.893 14:20:28 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:06:21.893 14:20:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.893 14:20:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.893 14:20:28 -- spdk/autotest.sh@70 -- # create_test_list 00:06:21.893 14:20:28 -- common/autotest_common.sh@746 -- # xtrace_disable 00:06:21.893 14:20:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.893 14:20:28 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:21.893 14:20:28 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:21.893 14:20:28 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:06:21.893 14:20:28 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:21.893 14:20:28 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:06:21.893 14:20:28 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:06:21.893 14:20:28 -- common/autotest_common.sh@1450 -- # uname 00:06:21.893 14:20:28 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:06:21.893 14:20:28 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:06:21.893 14:20:28 -- common/autotest_common.sh@1470 -- # uname 00:06:21.893 14:20:28 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:06:21.893 14:20:28 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:06:21.893 14:20:28 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:21.893 lcov: LCOV version 1.15 00:06:21.893 14:20:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:31.882 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:31.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:31.882 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:31.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:31.882 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:31.882 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:58.453 14:21:03 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:06:58.453 14:21:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:58.453 14:21:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.453 14:21:03 -- spdk/autotest.sh@89 -- # rm -f 00:06:58.453 14:21:03 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:58.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:58.453 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:06:58.453 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:06:58.453 14:21:04 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:06:58.453 14:21:04 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:06:58.453 14:21:04 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:06:58.453 14:21:04 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:06:58.453 14:21:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:06:58.453 14:21:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:06:58.453 14:21:04 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:06:58.453 14:21:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:58.453 14:21:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:06:58.453 14:21:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:06:58.453 14:21:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:06:58.453 14:21:04 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:06:58.453 14:21:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:58.453 14:21:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:06:58.453 14:21:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:06:58.453 14:21:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:06:58.453 14:21:04 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:06:58.453 14:21:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:58.453 14:21:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:06:58.453 14:21:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:06:58.453 14:21:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:06:58.453 14:21:04 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:06:58.453 14:21:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:58.453 14:21:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:06:58.453 14:21:04 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:06:58.453 14:21:04 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:06:58.453 14:21:04 -- spdk/autotest.sh@108 -- # grep -v p 00:06:58.453 14:21:04 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:06:58.453 14:21:04 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:06:58.453 14:21:04 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:06:58.453 14:21:04 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:06:58.453 14:21:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:58.453 No valid GPT data, bailing 00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # pt= 00:06:58.453 14:21:04 -- scripts/common.sh@394 -- # return 1 00:06:58.453 14:21:04 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:58.453 1+0 records in 00:06:58.453 1+0 records out 00:06:58.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044908 s, 233 MB/s 00:06:58.453 14:21:04 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:06:58.453 14:21:04 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:06:58.453 14:21:04 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:06:58.453 14:21:04 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:06:58.453 14:21:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:58.453 No valid GPT data, bailing 00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # pt= 00:06:58.453 14:21:04 -- scripts/common.sh@394 -- # return 1 00:06:58.453 14:21:04 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:58.453 1+0 records in 00:06:58.453 1+0 records out 00:06:58.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405644 s, 258 MB/s 00:06:58.453 14:21:04 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:06:58.453 14:21:04 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:06:58.453 14:21:04 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:06:58.453 14:21:04 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:06:58.453 14:21:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:58.453 No valid GPT data, bailing 00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # pt= 00:06:58.453 14:21:04 -- scripts/common.sh@394 -- # return 1 00:06:58.453 14:21:04 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:58.453 1+0 records in 00:06:58.453 1+0 records out 00:06:58.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00314886 s, 333 MB/s 00:06:58.453 14:21:04 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:06:58.453 14:21:04 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:06:58.453 14:21:04 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:06:58.453 14:21:04 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:06:58.453 14:21:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:58.453 No valid GPT data, bailing 00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:58.453 14:21:04 -- scripts/common.sh@393 -- # pt= 00:06:58.453 14:21:04 -- scripts/common.sh@394 -- # return 1 00:06:58.453 14:21:04 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:58.453 1+0 records in 00:06:58.453 1+0 records out 00:06:58.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0036199 s, 290 MB/s 00:06:58.453 14:21:04 -- spdk/autotest.sh@116 -- # sync 00:06:58.453 14:21:04 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:58.453 14:21:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:58.453 14:21:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:59.826 14:21:06 -- spdk/autotest.sh@122 -- # uname -s 00:06:59.826 14:21:06 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
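[Context note, not part of the console output: the sequence above, repeated for each of the four namespaces, is the pre-test disk scrub: after the spdk-gpt.py probe, blkid is asked whether the device carries a partition table, and if it does not, the first MiB is zero-filled so stale metadata cannot leak into later tests. Condensed into one loop, using only the commands seen in the trace:

    # Zero the first MiB of every NVMe namespace that has no partition table.
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [ -z "$pt" ]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
]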
00:06:59.826 14:21:06 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:59.826 14:21:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.826 14:21:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.826 14:21:06 -- common/autotest_common.sh@10 -- # set +x 00:06:59.826 ************************************ 00:06:59.826 START TEST setup.sh 00:06:59.826 ************************************ 00:06:59.826 14:21:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:59.826 * Looking for test storage... 00:06:59.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:59.826 14:21:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:59.826 14:21:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:59.826 14:21:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:59.826 14:21:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:59.826 14:21:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:59.826 14:21:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:59.826 14:21:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:59.826 14:21:06 -- scripts/common.sh@335 -- # IFS=.-: 00:06:59.826 14:21:06 -- scripts/common.sh@335 -- # read -ra ver1 00:06:59.826 14:21:06 -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.826 14:21:06 -- scripts/common.sh@336 -- # read -ra ver2 00:06:59.826 14:21:06 -- scripts/common.sh@337 -- # local 'op=<' 00:06:59.826 14:21:06 -- scripts/common.sh@339 -- # ver1_l=2 00:06:59.826 14:21:06 -- scripts/common.sh@340 -- # ver2_l=1 00:06:59.826 14:21:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:59.826 14:21:06 -- scripts/common.sh@343 -- # case "$op" in 00:06:59.827 14:21:06 -- scripts/common.sh@344 -- # : 1 00:06:59.827 14:21:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:59.827 14:21:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.827 14:21:06 -- scripts/common.sh@364 -- # decimal 1 00:06:59.827 14:21:06 -- scripts/common.sh@352 -- # local d=1 00:06:59.827 14:21:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.827 14:21:06 -- scripts/common.sh@354 -- # echo 1 00:06:59.827 14:21:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:59.827 14:21:06 -- scripts/common.sh@365 -- # decimal 2 00:06:59.827 14:21:06 -- scripts/common.sh@352 -- # local d=2 00:06:59.827 14:21:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.827 14:21:06 -- scripts/common.sh@354 -- # echo 2 00:06:59.827 14:21:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:59.827 14:21:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:59.827 14:21:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:59.827 14:21:06 -- scripts/common.sh@367 -- # return 0 00:06:59.827 14:21:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.827 14:21:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:59.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.827 --rc genhtml_branch_coverage=1 00:06:59.827 --rc genhtml_function_coverage=1 00:06:59.827 --rc genhtml_legend=1 00:06:59.827 --rc geninfo_all_blocks=1 00:06:59.827 --rc geninfo_unexecuted_blocks=1 00:06:59.827 00:06:59.827 ' 00:06:59.827 14:21:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:59.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.827 --rc genhtml_branch_coverage=1 00:06:59.827 --rc genhtml_function_coverage=1 00:06:59.827 --rc genhtml_legend=1 00:06:59.827 --rc geninfo_all_blocks=1 00:06:59.827 --rc geninfo_unexecuted_blocks=1 00:06:59.827 00:06:59.827 ' 00:06:59.827 14:21:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:59.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.827 --rc genhtml_branch_coverage=1 00:06:59.827 --rc genhtml_function_coverage=1 00:06:59.827 --rc genhtml_legend=1 00:06:59.827 --rc geninfo_all_blocks=1 00:06:59.827 --rc geninfo_unexecuted_blocks=1 00:06:59.827 00:06:59.827 ' 00:06:59.827 14:21:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:59.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.827 --rc genhtml_branch_coverage=1 00:06:59.827 --rc genhtml_function_coverage=1 00:06:59.827 --rc genhtml_legend=1 00:06:59.827 --rc geninfo_all_blocks=1 00:06:59.827 --rc geninfo_unexecuted_blocks=1 00:06:59.827 00:06:59.827 ' 00:06:59.827 14:21:06 -- setup/test-setup.sh@10 -- # uname -s 00:06:59.827 14:21:06 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:59.827 14:21:06 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:59.827 14:21:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.827 14:21:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.827 14:21:06 -- common/autotest_common.sh@10 -- # set +x 00:06:59.827 ************************************ 00:06:59.827 START TEST acl 00:06:59.827 ************************************ 00:06:59.827 14:21:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:59.827 * Looking for test storage... 
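[Context note, not part of the console output: the cmp_versions trace that keeps reappearing ("lt 1.15 2", decimal 1, decimal 2, return 0) is a version gate: it parses the last field of `lcov --version` (1.15 here) and, because that is below 2, exports the legacy `--rc lcov_*_coverage=1` spellings in LCOV_OPTS/LCOV. A condensed equivalent, with sort -V standing in for the script's field-by-field comparison and the lcov 2.x option names taken from lcov's documentation rather than from this log:

    # Choose lcov --rc option names based on the installed lcov version.
    ver=$(lcov --version | awk '{print $NF}')            # e.g. 1.15
    if [ "$(printf '%s\n' "$ver" 2 | sort -V | head -n1)" != 2 ]; then
        rc_opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # lcov < 2
    else
        rc_opts='--rc branch_coverage=1 --rc function_coverage=1'             # lcov >= 2
    fi
    export LCOV="lcov $rc_opts"
]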
00:07:00.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:00.085 14:21:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:00.085 14:21:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:00.085 14:21:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:00.085 14:21:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:00.085 14:21:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:00.085 14:21:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:00.085 14:21:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:00.085 14:21:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:00.085 14:21:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:00.085 14:21:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.085 14:21:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:00.085 14:21:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:00.085 14:21:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:00.085 14:21:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:00.085 14:21:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:00.085 14:21:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:00.085 14:21:06 -- scripts/common.sh@344 -- # : 1 00:07:00.085 14:21:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:00.085 14:21:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.085 14:21:06 -- scripts/common.sh@364 -- # decimal 1 00:07:00.085 14:21:06 -- scripts/common.sh@352 -- # local d=1 00:07:00.085 14:21:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.085 14:21:06 -- scripts/common.sh@354 -- # echo 1 00:07:00.085 14:21:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:00.085 14:21:06 -- scripts/common.sh@365 -- # decimal 2 00:07:00.085 14:21:06 -- scripts/common.sh@352 -- # local d=2 00:07:00.085 14:21:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.085 14:21:06 -- scripts/common.sh@354 -- # echo 2 00:07:00.085 14:21:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:00.085 14:21:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:00.085 14:21:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:00.085 14:21:06 -- scripts/common.sh@367 -- # return 0 00:07:00.085 14:21:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.085 14:21:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:00.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.085 --rc genhtml_branch_coverage=1 00:07:00.085 --rc genhtml_function_coverage=1 00:07:00.085 --rc genhtml_legend=1 00:07:00.085 --rc geninfo_all_blocks=1 00:07:00.085 --rc geninfo_unexecuted_blocks=1 00:07:00.085 00:07:00.085 ' 00:07:00.086 14:21:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:00.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.086 --rc genhtml_branch_coverage=1 00:07:00.086 --rc genhtml_function_coverage=1 00:07:00.086 --rc genhtml_legend=1 00:07:00.086 --rc geninfo_all_blocks=1 00:07:00.086 --rc geninfo_unexecuted_blocks=1 00:07:00.086 00:07:00.086 ' 00:07:00.086 14:21:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:00.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.086 --rc genhtml_branch_coverage=1 00:07:00.086 --rc genhtml_function_coverage=1 00:07:00.086 --rc genhtml_legend=1 00:07:00.086 --rc geninfo_all_blocks=1 00:07:00.086 --rc geninfo_unexecuted_blocks=1 00:07:00.086 00:07:00.086 ' 00:07:00.086 14:21:06 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:00.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.086 --rc genhtml_branch_coverage=1 00:07:00.086 --rc genhtml_function_coverage=1 00:07:00.086 --rc genhtml_legend=1 00:07:00.086 --rc geninfo_all_blocks=1 00:07:00.086 --rc geninfo_unexecuted_blocks=1 00:07:00.086 00:07:00.086 ' 00:07:00.086 14:21:06 -- setup/acl.sh@10 -- # get_zoned_devs 00:07:00.086 14:21:06 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:07:00.086 14:21:06 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:07:00.086 14:21:06 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:07:00.086 14:21:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:00.086 14:21:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:07:00.086 14:21:06 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:07:00.086 14:21:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:00.086 14:21:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:00.086 14:21:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:00.086 14:21:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:07:00.086 14:21:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:07:00.086 14:21:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:00.086 14:21:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:00.086 14:21:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:00.086 14:21:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:07:00.086 14:21:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:07:00.086 14:21:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:00.086 14:21:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:00.086 14:21:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:00.086 14:21:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:07:00.086 14:21:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:07:00.086 14:21:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:00.086 14:21:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:00.086 14:21:06 -- setup/acl.sh@12 -- # devs=() 00:07:00.086 14:21:06 -- setup/acl.sh@12 -- # declare -a devs 00:07:00.086 14:21:06 -- setup/acl.sh@13 -- # drivers=() 00:07:00.086 14:21:06 -- setup/acl.sh@13 -- # declare -A drivers 00:07:00.086 14:21:06 -- setup/acl.sh@51 -- # setup reset 00:07:00.086 14:21:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:00.086 14:21:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:01.021 14:21:07 -- setup/acl.sh@52 -- # collect_setup_devs 00:07:01.021 14:21:07 -- setup/acl.sh@16 -- # local dev driver 00:07:01.021 14:21:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:01.021 14:21:07 -- setup/acl.sh@15 -- # setup output status 00:07:01.021 14:21:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:01.021 14:21:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:01.021 Hugepages 00:07:01.021 node hugesize free / total 00:07:01.021 14:21:07 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:07:01.021 14:21:07 -- setup/acl.sh@19 -- # continue 00:07:01.021 14:21:07 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:07:01.021 00:07:01.021 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:01.021 14:21:07 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:07:01.021 14:21:07 -- setup/acl.sh@19 -- # continue 00:07:01.021 14:21:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:01.021 14:21:07 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:07:01.021 14:21:07 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:07:01.021 14:21:07 -- setup/acl.sh@20 -- # continue 00:07:01.021 14:21:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:01.021 14:21:07 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:07:01.021 14:21:07 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:01.021 14:21:07 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:07:01.021 14:21:07 -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:01.021 14:21:07 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:01.021 14:21:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:01.279 14:21:08 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:07:01.279 14:21:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:01.279 14:21:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:07:01.279 14:21:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:01.279 14:21:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:01.279 14:21:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:01.279 14:21:08 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:07:01.279 14:21:08 -- setup/acl.sh@54 -- # run_test denied denied 00:07:01.279 14:21:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.279 14:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.279 14:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:01.279 ************************************ 00:07:01.279 START TEST denied 00:07:01.279 ************************************ 00:07:01.279 14:21:08 -- common/autotest_common.sh@1114 -- # denied 00:07:01.279 14:21:08 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:07:01.279 14:21:08 -- setup/acl.sh@38 -- # setup output config 00:07:01.279 14:21:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:01.279 14:21:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:01.279 14:21:08 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:07:02.258 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:07:02.258 14:21:08 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:07:02.258 14:21:08 -- setup/acl.sh@28 -- # local dev driver 00:07:02.258 14:21:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:07:02.258 14:21:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:07:02.258 14:21:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:07:02.258 14:21:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:02.258 14:21:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:02.258 14:21:08 -- setup/acl.sh@41 -- # setup reset 00:07:02.258 14:21:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:02.258 14:21:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:02.823 ************************************ 00:07:02.824 END TEST denied 00:07:02.824 ************************************ 00:07:02.824 00:07:02.824 real 0m1.414s 00:07:02.824 user 0m0.612s 00:07:02.824 sys 0m0.783s 00:07:02.824 14:21:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.824 14:21:09 -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.824 14:21:09 -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:02.824 14:21:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.824 14:21:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.824 14:21:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.824 ************************************ 00:07:02.824 START TEST allowed 00:07:02.824 ************************************ 00:07:02.824 14:21:09 -- common/autotest_common.sh@1114 -- # allowed 00:07:02.824 14:21:09 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:07:02.824 14:21:09 -- setup/acl.sh@45 -- # setup output config 00:07:02.824 14:21:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:02.824 14:21:09 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:07:02.824 14:21:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:03.755 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:03.755 14:21:10 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:07:03.755 14:21:10 -- setup/acl.sh@28 -- # local dev driver 00:07:03.755 14:21:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:07:03.755 14:21:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:07:03.755 14:21:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:07:03.755 14:21:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:03.755 14:21:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:03.755 14:21:10 -- setup/acl.sh@48 -- # setup reset 00:07:03.755 14:21:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:03.755 14:21:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:04.321 00:07:04.321 real 0m1.571s 00:07:04.321 user 0m0.727s 00:07:04.321 sys 0m0.845s 00:07:04.321 ************************************ 00:07:04.321 14:21:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.321 14:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.321 END TEST allowed 00:07:04.321 ************************************ 00:07:04.321 ************************************ 00:07:04.321 END TEST acl 00:07:04.321 ************************************ 00:07:04.321 00:07:04.321 real 0m4.416s 00:07:04.321 user 0m2.035s 00:07:04.321 sys 0m2.396s 00:07:04.321 14:21:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.321 14:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.321 14:21:11 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:04.321 14:21:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.321 14:21:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.321 14:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.321 ************************************ 00:07:04.321 START TEST hugepages 00:07:04.321 ************************************ 00:07:04.321 14:21:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:04.321 * Looking for test storage... 
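[Context note, not part of the console output: the get_meminfo trace further below walks every line of /proc/meminfo (or a per-node meminfo file), splitting on ': ' and printing "continue" for each non-matching field until Hugepagesize is found, which is why it is so long. The lookup it performs can be written directly as (a condensed equivalent, not the script's exact code):

    # Read one field, e.g. Hugepagesize, from /proc/meminfo or a node's meminfo.
    get_meminfo() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        [ -n "$node" ] && file=/sys/devices/system/node/node$node/meminfo
        awk -F': *' -v k="$key" '$1 ~ k"$" { print $2 + 0 }' "$file"
    }
    get_meminfo Hugepagesize    # prints 2048 (kB) in the run above
]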
00:07:04.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:04.321 14:21:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:04.321 14:21:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:04.321 14:21:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:04.580 14:21:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:04.580 14:21:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:04.580 14:21:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:04.580 14:21:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:04.580 14:21:11 -- scripts/common.sh@335 -- # IFS=.-: 00:07:04.580 14:21:11 -- scripts/common.sh@335 -- # read -ra ver1 00:07:04.580 14:21:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.580 14:21:11 -- scripts/common.sh@336 -- # read -ra ver2 00:07:04.580 14:21:11 -- scripts/common.sh@337 -- # local 'op=<' 00:07:04.580 14:21:11 -- scripts/common.sh@339 -- # ver1_l=2 00:07:04.580 14:21:11 -- scripts/common.sh@340 -- # ver2_l=1 00:07:04.580 14:21:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:04.580 14:21:11 -- scripts/common.sh@343 -- # case "$op" in 00:07:04.580 14:21:11 -- scripts/common.sh@344 -- # : 1 00:07:04.580 14:21:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:04.580 14:21:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.580 14:21:11 -- scripts/common.sh@364 -- # decimal 1 00:07:04.580 14:21:11 -- scripts/common.sh@352 -- # local d=1 00:07:04.580 14:21:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.580 14:21:11 -- scripts/common.sh@354 -- # echo 1 00:07:04.580 14:21:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:04.580 14:21:11 -- scripts/common.sh@365 -- # decimal 2 00:07:04.580 14:21:11 -- scripts/common.sh@352 -- # local d=2 00:07:04.580 14:21:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.580 14:21:11 -- scripts/common.sh@354 -- # echo 2 00:07:04.580 14:21:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:04.580 14:21:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:04.580 14:21:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:04.580 14:21:11 -- scripts/common.sh@367 -- # return 0 00:07:04.580 14:21:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.580 14:21:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:04.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.580 --rc genhtml_branch_coverage=1 00:07:04.580 --rc genhtml_function_coverage=1 00:07:04.580 --rc genhtml_legend=1 00:07:04.580 --rc geninfo_all_blocks=1 00:07:04.580 --rc geninfo_unexecuted_blocks=1 00:07:04.580 00:07:04.580 ' 00:07:04.580 14:21:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:04.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.580 --rc genhtml_branch_coverage=1 00:07:04.580 --rc genhtml_function_coverage=1 00:07:04.580 --rc genhtml_legend=1 00:07:04.580 --rc geninfo_all_blocks=1 00:07:04.580 --rc geninfo_unexecuted_blocks=1 00:07:04.580 00:07:04.580 ' 00:07:04.580 14:21:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:04.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.580 --rc genhtml_branch_coverage=1 00:07:04.580 --rc genhtml_function_coverage=1 00:07:04.580 --rc genhtml_legend=1 00:07:04.580 --rc geninfo_all_blocks=1 00:07:04.580 --rc geninfo_unexecuted_blocks=1 00:07:04.580 00:07:04.580 ' 00:07:04.580 14:21:11 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:04.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.580 --rc genhtml_branch_coverage=1 00:07:04.580 --rc genhtml_function_coverage=1 00:07:04.580 --rc genhtml_legend=1 00:07:04.580 --rc geninfo_all_blocks=1 00:07:04.580 --rc geninfo_unexecuted_blocks=1 00:07:04.580 00:07:04.580 ' 00:07:04.580 14:21:11 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:04.580 14:21:11 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:04.580 14:21:11 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:04.580 14:21:11 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:04.581 14:21:11 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:04.581 14:21:11 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:04.581 14:21:11 -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:04.581 14:21:11 -- setup/common.sh@18 -- # local node= 00:07:04.581 14:21:11 -- setup/common.sh@19 -- # local var val 00:07:04.581 14:21:11 -- setup/common.sh@20 -- # local mem_f mem 00:07:04.581 14:21:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:04.581 14:21:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:04.581 14:21:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:04.581 14:21:11 -- setup/common.sh@28 -- # mapfile -t mem 00:07:04.581 14:21:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 5839452 kB' 'MemAvailable: 7350496 kB' 'Buffers: 3704 kB' 'Cached: 1720720 kB' 'SwapCached: 0 kB' 'Active: 497492 kB' 'Inactive: 1344776 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50932 kB' 'Shmem: 10508 kB' 'KReclaimable: 68148 kB' 'Slab: 163960 kB' 'SReclaimable: 68148 kB' 'SUnreclaim: 95812 kB' 'KernelStack: 6528 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 322776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- 
setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.581 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.581 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- 
# read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # continue 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.582 14:21:11 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.582 14:21:11 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:04.582 14:21:11 -- setup/common.sh@33 -- # echo 2048 00:07:04.582 14:21:11 -- setup/common.sh@33 -- # return 0 00:07:04.582 14:21:11 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:04.582 14:21:11 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:04.582 14:21:11 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:04.582 14:21:11 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:04.582 14:21:11 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:04.582 14:21:11 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:04.582 14:21:11 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:04.582 14:21:11 -- setup/hugepages.sh@207 -- # get_nodes 00:07:04.582 14:21:11 -- setup/hugepages.sh@27 -- # local node 00:07:04.582 14:21:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:04.582 14:21:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:04.582 14:21:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:04.582 14:21:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:04.582 14:21:11 -- setup/hugepages.sh@208 -- # clear_hp 00:07:04.582 14:21:11 -- setup/hugepages.sh@37 -- # local node hp 00:07:04.582 14:21:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:04.582 14:21:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:04.582 14:21:11 -- setup/hugepages.sh@41 -- # echo 0 00:07:04.582 14:21:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:04.582 14:21:11 -- setup/hugepages.sh@41 -- # echo 0 00:07:04.582 14:21:11 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:04.582 14:21:11 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:04.582 14:21:11 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:04.582 14:21:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.582 14:21:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.582 14:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.582 ************************************ 00:07:04.582 START TEST default_setup 00:07:04.582 ************************************ 00:07:04.582 14:21:11 -- common/autotest_common.sh@1114 -- # default_setup 00:07:04.582 14:21:11 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:04.582 14:21:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:04.582 14:21:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:04.582 14:21:11 -- setup/hugepages.sh@51 -- # shift 00:07:04.582 14:21:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:04.582 14:21:11 -- setup/hugepages.sh@52 -- # local node_ids 00:07:04.582 14:21:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:04.582 14:21:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:04.582 14:21:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:04.582 14:21:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:04.583 14:21:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:04.583 14:21:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:04.583 14:21:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:04.583 14:21:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:04.583 14:21:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:04.583 14:21:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:04.583 14:21:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:04.583 14:21:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:04.583 14:21:11 -- setup/hugepages.sh@73 -- # return 0 00:07:04.583 14:21:11 -- setup/hugepages.sh@137 -- # setup output 00:07:04.583 14:21:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:04.583 14:21:11 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:05.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:05.518 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.518 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.518 14:21:12 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:05.518 14:21:12 -- setup/hugepages.sh@89 -- # local node 00:07:05.518 14:21:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:05.518 14:21:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:05.518 14:21:12 -- setup/hugepages.sh@92 -- # local surp 00:07:05.518 14:21:12 -- setup/hugepages.sh@93 -- # local resv 00:07:05.518 14:21:12 -- setup/hugepages.sh@94 -- # local anon 00:07:05.518 14:21:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:05.518 14:21:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:05.518 14:21:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:05.518 14:21:12 -- setup/common.sh@18 -- # local node= 00:07:05.518 14:21:12 -- setup/common.sh@19 -- # local var val 00:07:05.518 14:21:12 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.518 14:21:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.518 14:21:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.518 14:21:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.518 14:21:12 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.518 14:21:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.518 14:21:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7961316 kB' 'MemAvailable: 9472212 kB' 'Buffers: 3704 kB' 'Cached: 1720708 kB' 'SwapCached: 0 kB' 'Active: 499156 kB' 'Inactive: 1344784 kB' 'Active(anon): 130016 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121204 kB' 'Mapped: 50900 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163636 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95800 kB' 'KernelStack: 6540 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.518 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.518 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- 
setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.519 14:21:12 -- setup/common.sh@33 -- # echo 0 00:07:05.519 14:21:12 -- setup/common.sh@33 -- # return 0 00:07:05.519 14:21:12 -- setup/hugepages.sh@97 -- # anon=0 00:07:05.519 14:21:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:05.519 14:21:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:05.519 14:21:12 -- setup/common.sh@18 -- # local node= 00:07:05.519 14:21:12 -- setup/common.sh@19 -- # local var val 00:07:05.519 14:21:12 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.519 14:21:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.519 14:21:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.519 14:21:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.519 14:21:12 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.519 14:21:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7961068 kB' 'MemAvailable: 9471968 kB' 'Buffers: 3704 kB' 'Cached: 1720708 kB' 'SwapCached: 0 kB' 'Active: 499080 kB' 'Inactive: 1344788 kB' 'Active(anon): 129940 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 50776 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163616 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95780 kB' 'KernelStack: 6508 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 
00:07:05.519 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.519 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.519 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- 
setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 
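The value this scan produces, together with the HugePages_Rsvd and HugePages_Total reads that follow, feeds the arithmetic that verify_nr_hugepages runs a little further down (hugepages.sh@107 and @109 in this trace). A condensed sketch reconstructed from those traced commands, with the inline values being the ones from this run and only the comments added:

    nr_hugepages=1024                       # set earlier by get_test_nr_hugepages 2097152 0
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    (( 1024 == nr_hugepages + surp + resv ))    # traced at hugepages.sh@107
    (( 1024 == nr_hugepages ))                  # traced at hugepages.sh@109

Both checks succeed here, so default_setup continues with the HugePages_Total read that follows.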
00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.520 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.520 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.520 14:21:12 -- setup/common.sh@33 -- # echo 0 00:07:05.520 14:21:12 -- setup/common.sh@33 -- # return 0 00:07:05.520 14:21:12 -- setup/hugepages.sh@99 -- # surp=0 00:07:05.520 14:21:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:05.520 14:21:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:05.520 14:21:12 -- setup/common.sh@18 -- # local node= 00:07:05.520 14:21:12 -- setup/common.sh@19 -- # local var val 00:07:05.520 14:21:12 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.520 14:21:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.520 14:21:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.520 14:21:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.520 14:21:12 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.520 14:21:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.520 
14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7961068 kB' 'MemAvailable: 9471968 kB' 'Buffers: 3704 kB' 'Cached: 1720708 kB' 'SwapCached: 0 kB' 'Active: 498992 kB' 'Inactive: 1344788 kB' 'Active(anon): 129852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121028 kB' 'Mapped: 50776 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163628 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95792 kB' 'KernelStack: 6524 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 
14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.521 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.521 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 
14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.522 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.522 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.522 14:21:12 -- setup/common.sh@33 -- # echo 0 00:07:05.522 14:21:12 -- setup/common.sh@33 -- # return 0 00:07:05.781 14:21:12 -- setup/hugepages.sh@100 -- # resv=0 00:07:05.781 14:21:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:05.781 nr_hugepages=1024 00:07:05.781 resv_hugepages=0 00:07:05.781 surplus_hugepages=0 00:07:05.781 anon_hugepages=0 00:07:05.781 14:21:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:05.781 14:21:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:05.781 14:21:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:05.781 14:21:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:05.781 14:21:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:05.781 14:21:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:05.781 14:21:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:05.781 14:21:12 -- setup/common.sh@18 -- # local node= 00:07:05.781 14:21:12 -- setup/common.sh@19 -- # local var val 00:07:05.781 14:21:12 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.781 14:21:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.781 14:21:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.781 14:21:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.781 14:21:12 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.781 14:21:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.781 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7961068 kB' 'MemAvailable: 9471968 kB' 'Buffers: 3704 kB' 'Cached: 1720708 kB' 'SwapCached: 0 kB' 'Active: 498976 kB' 'Inactive: 1344788 kB' 'Active(anon): 129836 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121060 kB' 'Mapped: 50776 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163628 kB' 
'SReclaimable: 67836 kB' 'SUnreclaim: 95792 kB' 'KernelStack: 6540 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 
14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- 
setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.782 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.782 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- 
setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.783 14:21:12 -- setup/common.sh@33 -- # echo 1024 00:07:05.783 14:21:12 -- setup/common.sh@33 -- # return 0 00:07:05.783 14:21:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:05.783 14:21:12 -- setup/hugepages.sh@112 -- # get_nodes 00:07:05.783 14:21:12 -- setup/hugepages.sh@27 -- # local node 00:07:05.783 14:21:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:05.783 14:21:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:05.783 14:21:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:05.783 14:21:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:05.783 14:21:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:05.783 14:21:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:05.783 14:21:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:05.783 14:21:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:05.783 14:21:12 -- setup/common.sh@18 -- # local node=0 00:07:05.783 14:21:12 -- setup/common.sh@19 -- # local var val 00:07:05.783 14:21:12 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.783 14:21:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.783 14:21:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:05.783 14:21:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:05.783 14:21:12 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.783 14:21:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7961068 kB' 'MemUsed: 4278048 kB' 'SwapCached: 0 kB' 'Active: 498968 kB' 'Inactive: 1344788 kB' 'Active(anon): 129828 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1724412 kB' 'Mapped: 50776 kB' 'AnonPages: 121068 kB' 'Shmem: 10484 kB' 'KernelStack: 6540 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 163628 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 
14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.783 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.783 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # continue 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.784 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.784 14:21:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.784 14:21:12 -- setup/common.sh@33 -- # echo 0 00:07:05.784 14:21:12 -- setup/common.sh@33 -- # return 0 00:07:05.784 14:21:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:05.784 14:21:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:05.784 14:21:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:05.784 node0=1024 expecting 1024 00:07:05.784 ************************************ 00:07:05.784 END TEST default_setup 00:07:05.784 ************************************ 00:07:05.784 14:21:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:05.784 14:21:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:05.784 14:21:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:05.784 00:07:05.784 real 0m1.111s 00:07:05.784 user 0m0.538s 00:07:05.784 sys 0m0.484s 00:07:05.784 14:21:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.784 14:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.784 14:21:12 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:05.784 14:21:12 
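The wall of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / 'continue' lines above is the xtrace of a field-scanning loop: setup/common.sh dumps /proc/meminfo (or the per-node copy under /sys/devices/system/node) into an array and walks it entry by entry until the requested key matches, then echoes its value. A minimal, self-contained sketch of that lookup follows, assuming bash with extglob; the function name get_meminfo_sketch and the example values are illustrative, not the SPDK script verbatim.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (simplified; the real helper lives in
# SPDK's test/setup/common.sh). Given a field name and an optional NUMA node,
# echo that field's value. Each "[[ X == \H\u\g\e... ]]" / "continue" pair in the
# log is one iteration of the scan loop below running under `set -x`.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node queries read the sysfs copy, whose lines start with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix, if present
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: the two values checked above, total hugepages and node 0 surplus.
get_meminfo_sketch HugePages_Total      # 1024 in this run
get_meminfo_sketch HugePages_Surp 0     # 0 in this run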
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.784 14:21:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.784 14:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.784 ************************************ 00:07:05.784 START TEST per_node_1G_alloc 00:07:05.784 ************************************ 00:07:05.784 14:21:12 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:07:05.784 14:21:12 -- setup/hugepages.sh@143 -- # local IFS=, 00:07:05.784 14:21:12 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:07:05.784 14:21:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:07:05.784 14:21:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:05.784 14:21:12 -- setup/hugepages.sh@51 -- # shift 00:07:05.784 14:21:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:05.784 14:21:12 -- setup/hugepages.sh@52 -- # local node_ids 00:07:05.784 14:21:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:05.784 14:21:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:05.784 14:21:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:05.784 14:21:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:05.784 14:21:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:05.784 14:21:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:05.784 14:21:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:05.784 14:21:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:05.784 14:21:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:05.784 14:21:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:05.784 14:21:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:05.784 14:21:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:05.784 14:21:12 -- setup/hugepages.sh@73 -- # return 0 00:07:05.784 14:21:12 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:05.784 14:21:12 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:07:05.784 14:21:12 -- setup/hugepages.sh@146 -- # setup output 00:07:05.784 14:21:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:05.784 14:21:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:06.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.100 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:06.100 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:06.100 14:21:12 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:07:06.100 14:21:12 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:06.100 14:21:12 -- setup/hugepages.sh@89 -- # local node 00:07:06.100 14:21:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:06.100 14:21:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:06.100 14:21:12 -- setup/hugepages.sh@92 -- # local surp 00:07:06.100 14:21:12 -- setup/hugepages.sh@93 -- # local resv 00:07:06.100 14:21:12 -- setup/hugepages.sh@94 -- # local anon 00:07:06.100 14:21:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:06.100 14:21:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:06.100 14:21:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:06.100 14:21:12 -- setup/common.sh@18 -- # local node= 00:07:06.100 14:21:12 -- setup/common.sh@19 -- # local var val 00:07:06.100 14:21:12 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.100 14:21:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.100 14:21:12 -- 
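Here the trace has moved on to per_node_1G_alloc: get_test_nr_hugepages converts the requested 1048576 kB into a page count using the default hugepage size and assigns it to the node list passed in, which is how the run arrives at NRHUGE=512 with HUGENODE=0 before re-running setup.sh. A hedged sketch of that sizing step follows; the helper name and output format are illustrative, while the arithmetic mirrors what the trace shows (1048576 kB over 2048 kB pages gives 512 pages on node 0).

# Sketch of the per-node sizing traced here (illustrative; not SPDK's
# test/setup/hugepages.sh verbatim). A requested size in kB becomes a page
# count via the default hugepage size, and each explicitly requested NUMA
# node is assigned that count.
get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift
    local -a user_nodes=("$@")
    local default_hugepage_kb
    default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    (( size_kb >= default_hugepage_kb )) || return 1
    local nr_hugepages=$(( size_kb / default_hugepage_kb ))
    local -A nodes_test=()
    local node
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages"
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[$node]} expecting ${nodes_test[$node]}"
    done
}

# With a 2048 kB default hugepage size this prints nr_hugepages=512 and
# node0=512 expecting 512, in the same "nodeN=... expecting ..." style the
# default_setup test printed above.
get_test_nr_hugepages_sketch 1048576 0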
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.100 14:21:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.100 14:21:12 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.100 14:21:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9013736 kB' 'MemAvailable: 10524640 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499364 kB' 'Inactive: 1344792 kB' 'Active(anon): 130224 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121308 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163628 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95792 kB' 'KernelStack: 6568 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 
-- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 
14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.100 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.100 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:12 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.101 14:21:12 -- setup/common.sh@33 -- # echo 0 00:07:06.101 14:21:12 -- setup/common.sh@33 -- # return 0 00:07:06.101 14:21:12 -- setup/hugepages.sh@97 -- # anon=0 00:07:06.101 14:21:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:06.101 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:06.101 14:21:13 -- setup/common.sh@18 -- # local node= 00:07:06.101 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.101 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.101 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.101 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.101 14:21:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.101 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.101 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9013736 kB' 'MemAvailable: 10524640 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 498984 kB' 'Inactive: 1344792 kB' 'Active(anon): 129844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 
kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120960 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163652 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95816 kB' 'KernelStack: 6544 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.101 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.101 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # 
continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.102 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.102 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.363 14:21:13 -- setup/common.sh@33 -- # echo 0 00:07:06.363 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.363 14:21:13 -- setup/hugepages.sh@99 -- # surp=0 00:07:06.363 14:21:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:06.363 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:06.363 14:21:13 -- setup/common.sh@18 -- # local node= 00:07:06.363 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.363 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.363 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.363 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.363 14:21:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.363 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.363 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9013736 kB' 'MemAvailable: 10524640 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499048 kB' 'Inactive: 1344792 kB' 'Active(anon): 129908 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121072 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163648 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95812 kB' 'KernelStack: 6560 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.363 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.363 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.364 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.364 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.365 14:21:13 -- setup/common.sh@33 -- # echo 0 00:07:06.365 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.365 14:21:13 -- setup/hugepages.sh@100 -- # resv=0 00:07:06.365 14:21:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:06.365 nr_hugepages=512 00:07:06.365 resv_hugepages=0 00:07:06.365 surplus_hugepages=0 00:07:06.365 anon_hugepages=0 00:07:06.365 14:21:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:06.365 14:21:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:06.365 14:21:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:06.365 14:21:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:06.365 14:21:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:06.365 14:21:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:06.365 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:06.365 14:21:13 -- setup/common.sh@18 -- # local node= 00:07:06.365 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.365 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.365 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.365 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.365 14:21:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.365 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.365 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9013736 kB' 'MemAvailable: 10524640 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499104 kB' 'Inactive: 1344792 kB' 'Active(anon): 129964 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121072 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163644 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95808 kB' 'KernelStack: 6560 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 
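For reference, the IFS=': ' / read -r var val _ / continue lines repeated above are one pass of the test's meminfo lookup: load /proc/meminfo (or a node's meminfo file), strip any "Node <n> " prefix, and walk the fields until the requested key is hit, echoing its value. A minimal sketch of that pattern, using an assumed helper name (get_meminfo_field) rather than the real setup/common.sh function:

shopt -s extglob                              # for the +([0-9]) prefix strip below
get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # With a node index, read that node's counters instead of the global ones.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local mem line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node lines carry a "Node <n> " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # skip MemTotal, MemFree, ... until the key matches
        echo "$val"
        return 0
    done
    return 1
}
# e.g. get_meminfo_field HugePages_Rsvd   -> 0 in the run above, the resv value
#      fed into the (( 512 == nr_hugepages + surp + resv )) check.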
14:21:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 
14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.365 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.365 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.366 14:21:13 -- setup/common.sh@33 -- # echo 512 00:07:06.366 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.366 14:21:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:06.366 14:21:13 -- setup/hugepages.sh@112 -- # get_nodes 00:07:06.366 14:21:13 -- setup/hugepages.sh@27 -- # local node 00:07:06.366 14:21:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:06.366 14:21:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:06.366 14:21:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:06.366 14:21:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:06.366 14:21:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:06.366 14:21:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:06.366 14:21:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:06.366 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:06.366 14:21:13 -- setup/common.sh@18 -- # local node=0 00:07:06.366 14:21:13 -- setup/common.sh@19 -- # local 
var val 00:07:06.366 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.366 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.366 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:06.366 14:21:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:06.366 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.366 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9013736 kB' 'MemUsed: 3225380 kB' 'SwapCached: 0 kB' 'Active: 499052 kB' 'Inactive: 1344792 kB' 'Active(anon): 129912 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1724416 kB' 'Mapped: 50736 kB' 'AnonPages: 121004 kB' 'Shmem: 10484 kB' 'KernelStack: 6544 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 163640 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.366 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.366 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- 
setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.367 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.367 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.367 14:21:13 -- setup/common.sh@33 -- # echo 0 00:07:06.367 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.367 node0=512 expecting 512 00:07:06.367 ************************************ 00:07:06.367 END TEST per_node_1G_alloc 00:07:06.367 ************************************ 00:07:06.367 14:21:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:06.367 14:21:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:06.367 14:21:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:06.367 14:21:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:06.367 14:21:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:06.367 14:21:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:06.367 00:07:06.367 real 0m0.577s 00:07:06.367 user 0m0.262s 00:07:06.367 sys 0m0.296s 00:07:06.367 14:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.367 14:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.367 14:21:13 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:06.367 14:21:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.367 14:21:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.367 14:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.367 ************************************ 00:07:06.367 START TEST even_2G_alloc 00:07:06.367 ************************************ 00:07:06.367 14:21:13 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:07:06.367 14:21:13 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:06.367 14:21:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:06.367 14:21:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:06.367 14:21:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:06.367 14:21:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:06.367 14:21:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:06.367 14:21:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:06.367 14:21:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:06.367 14:21:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:06.367 14:21:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:06.367 14:21:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:06.367 14:21:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:06.367 14:21:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:06.367 14:21:13 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:06.367 14:21:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:06.367 14:21:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:07:06.367 14:21:13 -- setup/hugepages.sh@83 -- # : 0 00:07:06.367 14:21:13 -- setup/hugepages.sh@84 -- # : 0 00:07:06.367 14:21:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:06.367 14:21:13 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:06.367 14:21:13 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:06.367 14:21:13 -- setup/hugepages.sh@153 -- # setup output 00:07:06.367 14:21:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:06.368 14:21:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:06.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.626 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:06.626 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:06.626 14:21:13 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:06.626 14:21:13 -- setup/hugepages.sh@89 -- # local node 00:07:06.626 14:21:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:06.626 14:21:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:06.626 14:21:13 -- setup/hugepages.sh@92 -- # local surp 00:07:06.626 14:21:13 -- setup/hugepages.sh@93 -- # local resv 00:07:06.626 14:21:13 -- setup/hugepages.sh@94 -- # local anon 00:07:06.626 14:21:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:06.887 14:21:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:06.887 14:21:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:06.887 14:21:13 -- setup/common.sh@18 -- # local node= 00:07:06.887 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.887 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.887 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.887 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.887 14:21:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.887 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.887 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7961800 kB' 'MemAvailable: 9472704 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499260 kB' 'Inactive: 1344792 kB' 'Active(anon): 130120 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121208 kB' 'Mapped: 50860 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163684 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95848 kB' 'KernelStack: 6576 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.887 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.887 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 
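The even_2G_alloc sizing visible in this pass follows from the counters just printed: the size argument 2097152 divided by the 2048 kB Hugepagesize yields nr_hugepages=1024 (2 GiB worth of 2 MiB pages, hence Hugetlb: 2097152 kB), and with HUGE_EVEN_ALLOC=yes the pool is split evenly across NUMA nodes, which on this single-node VM means all 1024 pages land on node0. A rough sketch of that arithmetic, with assumed names:

get_test_nr_hugepages_sketch() {
    local size=$1                              # 2097152 in the run above
    local default_hugepages
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 (kB)
    local nr_hugepages=$(( size / default_hugepages ))                     # 2097152 / 2048 = 1024
    local nodes=(/sys/devices/system/node/node[0-9]*)
    echo "nr_hugepages=$nr_hugepages per_node=$(( nr_hugepages / ${#nodes[@]} ))"
}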
14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # 
continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.888 14:21:13 -- setup/common.sh@33 -- # echo 0 00:07:06.888 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.888 14:21:13 -- setup/hugepages.sh@97 -- # anon=0 00:07:06.888 14:21:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:06.888 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:06.888 14:21:13 -- setup/common.sh@18 -- # local node= 00:07:06.888 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.888 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.888 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.888 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.888 14:21:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.888 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.888 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.888 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7962368 kB' 'MemAvailable: 9473272 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499084 kB' 'Inactive: 1344792 kB' 'Active(anon): 129944 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121072 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163672 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95836 kB' 'KernelStack: 6560 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.888 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.888 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 
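Taken together, the verify_nr_hugepages passes above are plain bookkeeping: fetch the hugepage counters the same way, require HugePages_Total to equal the requested pool plus surplus and reserved pages (the earlier (( 512 == nr_hugepages + surp + resv )) check), and then compare each node's share against its expected split, which is what the "node0=512 expecting 512" line reported for the previous test. A condensed sketch, again with assumed names:

verify_nr_hugepages_sketch() {
    local requested=$1                         # 1024 for even_2G_alloc, 512 for per_node_1G_alloc
    local total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    (( total == requested + surp + resv )) || return 1
    local node
    for node in /sys/devices/system/node/node[0-9]*; do
        # per-node meminfo lines look like "Node 0 HugePages_Total:   512"
        echo "${node##*/}=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")"
    done
}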
00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # 
continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.889 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.889 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.890 14:21:13 -- setup/common.sh@33 -- # echo 0 00:07:06.890 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.890 14:21:13 -- setup/hugepages.sh@99 -- # surp=0 00:07:06.890 14:21:13 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:06.890 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:06.890 14:21:13 -- setup/common.sh@18 -- # local node= 00:07:06.890 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.890 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.890 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.890 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.890 14:21:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.890 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.890 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7962708 kB' 'MemAvailable: 9473612 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499104 kB' 'Inactive: 1344792 kB' 'Active(anon): 129964 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121068 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163672 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95836 kB' 'KernelStack: 6560 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.890 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.890 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 
00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- 
setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.891 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.891 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 
00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.892 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.892 14:21:13 -- setup/common.sh@33 -- # echo 0 00:07:06.892 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.892 nr_hugepages=1024 00:07:06.892 resv_hugepages=0 00:07:06.892 surplus_hugepages=0 00:07:06.892 14:21:13 -- setup/hugepages.sh@100 -- # resv=0 00:07:06.892 14:21:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:06.892 14:21:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:06.892 14:21:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:06.892 anon_hugepages=0 00:07:06.892 14:21:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:06.892 14:21:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:06.892 14:21:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:06.892 14:21:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:06.892 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:06.892 14:21:13 -- setup/common.sh@18 -- # local node= 00:07:06.892 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.892 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.892 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.892 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.892 14:21:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.892 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.892 14:21:13 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.892 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7963228 kB' 'MemAvailable: 9474132 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499040 kB' 'Inactive: 1344792 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120992 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163672 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95836 kB' 'KernelStack: 6560 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 
14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.893 14:21:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:07:06.893 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 
00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.894 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 
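(Editor's note, not part of the captured trace: the guards recorded a little earlier in this pass, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), are the heart of the verification: the kernel's HugePages_Total must equal the pool the test requested plus any surplus and reserved pages. With the values from this run, the arithmetic is simply:

    nr_hugepages=1024   # pool requested by the test
    surp=0              # HugePages_Surp read from /proc/meminfo above
    resv=0              # HugePages_Rsvd read from /proc/meminfo above
    total=1024          # HugePages_Total read from /proc/meminfo above
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent")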
00:07:06.894 14:21:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.894 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:06.895 14:21:13 -- setup/common.sh@33 -- # echo 1024 00:07:06.895 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.895 14:21:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:06.895 14:21:13 -- setup/hugepages.sh@112 -- # get_nodes 00:07:06.895 14:21:13 -- setup/hugepages.sh@27 -- # local node 00:07:06.895 14:21:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:06.895 14:21:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:06.895 14:21:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:06.895 14:21:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:06.895 14:21:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:06.895 14:21:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:06.895 14:21:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:06.895 14:21:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:06.895 14:21:13 -- setup/common.sh@18 -- # local node=0 00:07:06.895 14:21:13 -- setup/common.sh@19 -- # local var val 00:07:06.895 14:21:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:06.895 14:21:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.895 14:21:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:06.895 14:21:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:06.895 14:21:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.895 14:21:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7964100 kB' 'MemUsed: 4275016 kB' 'SwapCached: 0 kB' 'Active: 498896 kB' 'Inactive: 1344792 kB' 'Active(anon): 129756 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1724416 kB' 'Mapped: 50736 kB' 'AnonPages: 120916 kB' 'Shmem: 10484 kB' 'KernelStack: 6560 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 163672 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:06.895 14:21:13 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 
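(Editor's note, not part of the captured trace: when get_meminfo is called with a node argument, as in the "HugePages_Surp 0" call above, the same scan runs against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and each line's "Node 0 " prefix is stripped first, which is what the mapfile and ${mem[@]#Node +([0-9]) } entries show. A simplified sketch of that file selection, assuming extglob, not the original helper verbatim:

    node=0                                   # empty means system-wide /proc/meminfo
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # sysfs lines read "Node 0 MemFree: ...": drop the prefix)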
00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.895 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.895 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- 
setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # continue 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:06.896 14:21:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:06.896 14:21:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.896 14:21:13 -- setup/common.sh@33 -- # echo 0 00:07:06.896 14:21:13 -- setup/common.sh@33 -- # return 0 00:07:06.896 node0=1024 expecting 1024 00:07:06.896 ************************************ 00:07:06.896 END TEST even_2G_alloc 00:07:06.896 ************************************ 00:07:06.896 14:21:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:06.896 14:21:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
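(Editor's note, not part of the captured trace: even_2G_alloc finishes by folding the per-node reserved and surplus counts back into the expected totals and comparing them with what node0 actually reports, which with a single node and no surplus reduces to the "node0=1024 expecting 1024" line above. A condensed, illustrative form of that check, not the script's exact loop:

    expected=1024        # nodes_test[0]: pages the test asked for on node0
    surp=0; resv=0       # HugePages_Surp / HugePages_Rsvd read from node0 meminfo above
    (( expected += surp + resv ))
    actual=1024          # HugePages_Total reported for node0
    echo "node0=$actual expecting $expected"
    [[ $actual -eq $expected ]] || echo "hugepage distribution mismatch on node0")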
00:07:06.896 14:21:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:06.896 14:21:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:06.896 14:21:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:06.896 14:21:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:06.896 00:07:06.896 real 0m0.584s 00:07:06.896 user 0m0.290s 00:07:06.896 sys 0m0.278s 00:07:06.896 14:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.896 14:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:07.154 14:21:13 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:07:07.154 14:21:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.154 14:21:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.154 14:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:07.154 ************************************ 00:07:07.154 START TEST odd_alloc 00:07:07.154 ************************************ 00:07:07.154 14:21:13 -- common/autotest_common.sh@1114 -- # odd_alloc 00:07:07.154 14:21:13 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:07:07.154 14:21:13 -- setup/hugepages.sh@49 -- # local size=2098176 00:07:07.154 14:21:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:07.154 14:21:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:07.154 14:21:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:07:07.154 14:21:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:07.154 14:21:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:07.154 14:21:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:07.154 14:21:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:07:07.154 14:21:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:07.154 14:21:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:07.154 14:21:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:07.154 14:21:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:07.154 14:21:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:07.154 14:21:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:07.154 14:21:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:07:07.154 14:21:13 -- setup/hugepages.sh@83 -- # : 0 00:07:07.154 14:21:13 -- setup/hugepages.sh@84 -- # : 0 00:07:07.154 14:21:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:07.154 14:21:13 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:07:07.154 14:21:13 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:07:07.154 14:21:13 -- setup/hugepages.sh@160 -- # setup output 00:07:07.154 14:21:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:07.154 14:21:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:07.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:07.415 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:07.415 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:07.415 14:21:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:07:07.415 14:21:14 -- setup/hugepages.sh@89 -- # local node 00:07:07.415 14:21:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:07.415 14:21:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:07.415 14:21:14 -- setup/hugepages.sh@92 -- # local surp 00:07:07.415 14:21:14 -- setup/hugepages.sh@93 -- # local resv 00:07:07.415 14:21:14 -- setup/hugepages.sh@94 -- # local anon 00:07:07.415 14:21:14 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:07.415 14:21:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:07.415 14:21:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:07.415 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.415 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.415 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.415 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.415 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.415 14:21:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.415 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.415 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.415 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.415 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7963772 kB' 'MemAvailable: 9474676 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499520 kB' 'Inactive: 1344792 kB' 'Active(anon): 130380 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121428 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163672 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95836 kB' 'KernelStack: 6564 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 
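For orientation, the nr_hugepages=1025 value traced above for odd_alloc follows from plain integer arithmetic. The sketch below is a reconstruction consistent with the traced numbers (HUGEMEM=2049, Hugepagesize 2048 kB), not the literal setup/hugepages.sh code; the ceiling rounding is an assumption that happens to reproduce the observed result.

# hedged sketch: how a 2049 MB HUGEMEM request becomes an odd 1025 pages
hugemem_mb=2049                                  # value exported in the trace above
size_kb=$(( hugemem_mb * 1024 ))                 # 2098176 kB, as passed to get_test_nr_hugepages
hugepgsz_kb=2048                                 # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( (size_kb + hugepgsz_kb - 1) / hugepgsz_kb ))   # round up
echo "nr_hugepages=$nr_hugepages"                # prints nr_hugepages=1025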
00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # 
continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.416 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.416 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.417 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:07.417 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.417 14:21:14 -- setup/hugepages.sh@97 -- # anon=0 00:07:07.417 14:21:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:07.417 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:07.417 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.417 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.417 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.417 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.417 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.417 14:21:14 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.417 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.417 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7963772 kB' 'MemAvailable: 9474676 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499076 kB' 'Inactive: 1344792 kB' 'Active(anon): 129936 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120992 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163672 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95836 kB' 'KernelStack: 6532 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- 
# read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.417 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.417 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 
14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.418 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.418 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 
00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.419 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:07.419 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.419 14:21:14 -- setup/hugepages.sh@99 -- # surp=0 00:07:07.419 14:21:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:07.419 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:07.419 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.419 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.419 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.419 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.419 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.419 14:21:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.419 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.419 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7964476 kB' 'MemAvailable: 9475380 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499128 kB' 'Inactive: 1344792 kB' 'Active(anon): 129988 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121076 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163676 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95840 kB' 'KernelStack: 6548 kB' 
'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.419 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.419 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.420 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.420 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.421 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:07.421 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.421 14:21:14 -- setup/hugepages.sh@100 -- # resv=0 00:07:07.421 nr_hugepages=1025 00:07:07.421 14:21:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:07:07.421 resv_hugepages=0 00:07:07.421 14:21:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:07.421 surplus_hugepages=0 00:07:07.421 14:21:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:07.421 anon_hugepages=0 00:07:07.421 14:21:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:07.421 14:21:14 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:07.421 14:21:14 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:07:07.421 14:21:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:07.421 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:07.421 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.421 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.421 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.421 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.421 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.421 14:21:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.421 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.421 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7964768 kB' 'MemAvailable: 9475672 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499272 kB' 'Inactive: 1344792 kB' 'Active(anon): 130132 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121180 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163664 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95828 kB' 'KernelStack: 6532 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 
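The long runs of '[[ <field> == ... ]]' / 'continue' entries above are setup/common.sh's get_meminfo scanning each /proc/meminfo field until it reaches the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, then HugePages_Total). A hedged, simplified stand-in for that lookup, keeping the names used in the trace:

get_meminfo_sketch() {                 # simplified; not the SPDK function itself
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node lookups read that node's own meminfo when it exists
    # (the traced script also strips the leading "Node N " prefix from those lines)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. 0 for HugePages_Surp, 1025 for HugePages_Total
        return 0
    done < "$mem_f"
}

verify_nr_hugepages then only needs the accounting check seen at hugepages.sh@107: the observed HugePages_Total (1025) must equal nr_hugepages + surp + resv, which holds in this run with surp=0 and resv=0.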
00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.421 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.421 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 
00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.422 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.422 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.423 14:21:14 -- setup/common.sh@33 -- # echo 1025 00:07:07.423 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.423 14:21:14 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:07.423 14:21:14 -- setup/hugepages.sh@112 -- # get_nodes 00:07:07.423 14:21:14 -- setup/hugepages.sh@27 -- # local node 00:07:07.423 14:21:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:07.423 14:21:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
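The trace above is setup/common.sh walking every field of node0's meminfo until it reaches HugePages_Total, echoing 1025, after which setup/hugepages.sh@110 asserts (( 1025 == nr_hugepages + surp + resv )) and get_nodes credits all 1025 pages to node0. A minimal standalone sketch of that per-node read, assuming a single-node VM and using an illustrative variable name for the expected count:

    #!/usr/bin/env bash
    # Read HugePages_Total for one NUMA node and compare it to the count the
    # odd_alloc test expects (nr_hugepages + surplus + reserved).
    node=0
    expected=1025                                   # illustrative; taken from this log
    meminfo=/sys/devices/system/node/node${node}/meminfo

    # Per-node meminfo lines carry a "Node N " prefix, so strip it before matching.
    total=$(sed 's/^Node [0-9]* //' "$meminfo" | awk '/^HugePages_Total:/ {print $2}')

    if (( total == expected )); then
        echo "node${node}=${total} expecting ${expected}"
    else
        echo "hugepage accounting mismatch on node${node}: got ${total}, expected ${expected}" >&2
        exit 1
    fi

The echo format mirrors the 'node0=1025 expecting 1025' line printed further down in this log.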
00:07:07.423 14:21:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:07.423 14:21:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:07.423 14:21:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:07.423 14:21:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:07.423 14:21:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:07.423 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:07.423 14:21:14 -- setup/common.sh@18 -- # local node=0 00:07:07.423 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.423 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.423 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.423 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:07.423 14:21:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:07.423 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.423 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7964768 kB' 'MemUsed: 4274348 kB' 'SwapCached: 0 kB' 'Active: 498916 kB' 'Inactive: 1344792 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724416 kB' 'Mapped: 50788 kB' 'AnonPages: 121144 kB' 'Shmem: 10484 kB' 'KernelStack: 6532 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 163652 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 
14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.423 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.423 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 
14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.682 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.682 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.683 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.683 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.683 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:07.683 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.683 14:21:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:07.683 14:21:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:07.683 14:21:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:07.683 node0=1025 expecting 1025 00:07:07.683 14:21:14 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:07:07.683 14:21:14 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:07:07.683 00:07:07.683 real 0m0.530s 00:07:07.683 user 0m0.269s 00:07:07.683 sys 0m0.291s 00:07:07.683 14:21:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.683 14:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:07.683 ************************************ 00:07:07.683 END TEST odd_alloc 00:07:07.683 ************************************ 00:07:07.683 14:21:14 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:07.683 14:21:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.683 14:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.683 14:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:07.683 ************************************ 00:07:07.683 START TEST custom_alloc 00:07:07.683 ************************************ 00:07:07.683 14:21:14 -- common/autotest_common.sh@1114 -- # custom_alloc 00:07:07.683 14:21:14 -- setup/hugepages.sh@167 -- # local IFS=, 00:07:07.683 14:21:14 -- setup/hugepages.sh@169 -- # local node 00:07:07.683 14:21:14 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:07.683 14:21:14 -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:07.683 14:21:14 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:07.683 14:21:14 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:07:07.683 14:21:14 -- setup/hugepages.sh@49 -- # local size=1048576 00:07:07.683 14:21:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:07.683 14:21:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:07.683 14:21:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:07.683 14:21:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:07.683 14:21:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:07.683 14:21:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:07.683 14:21:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:07.683 14:21:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:07.683 14:21:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:07.683 14:21:14 -- setup/hugepages.sh@83 -- # : 0 00:07:07.683 14:21:14 -- setup/hugepages.sh@84 -- # : 0 00:07:07.683 14:21:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:07.683 14:21:14 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:07.683 14:21:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:07.683 14:21:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:07.683 14:21:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:07.683 14:21:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:07.683 14:21:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:07.683 14:21:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:07.683 14:21:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:07.683 14:21:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:07.683 14:21:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:07.683 14:21:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:07.683 14:21:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:07.683 14:21:14 -- setup/hugepages.sh@78 -- # return 0 00:07:07.683 14:21:14 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:07:07.683 14:21:14 -- setup/hugepages.sh@187 -- # setup output 00:07:07.683 14:21:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:07.683 14:21:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:07.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:07.944 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:07.944 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:07.944 14:21:14 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:07:07.944 14:21:14 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:07.944 14:21:14 -- setup/hugepages.sh@89 -- # local node 00:07:07.944 14:21:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:07.944 14:21:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:07.944 14:21:14 -- setup/hugepages.sh@92 -- # local surp 
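custom_alloc begins by turning a requested size into a hugepage count: get_test_nr_hugepages is handed 1048576 kB (1 GiB), which at the 2048 kB default hugepage size yields the nr_hugepages=512 shown above; that count is pinned to node 0 through HUGENODE before scripts/setup.sh rebinds the test devices. A short sketch of the conversion, assuming the 2048 kB Hugepagesize reported later in this log; the helper name is illustrative, not the one used in setup/hugepages.sh:

    #!/usr/bin/env bash
    # Convert a requested allocation size (in kB) into a hugepage count.
    size_to_hugepages() {
        local size_kb=$1
        local hugepagesize_kb=2048   # default hugepage size on this VM, from /proc/meminfo
        echo $(( size_kb / hugepagesize_kb ))
    }

    nr_hugepages=$(size_to_hugepages 1048576)        # -> 512
    export HUGENODE="nodes_hp[0]=${nr_hugepages}"    # pin all 512 pages to node 0
    echo "requesting ${nr_hugepages} hugepages on node 0"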
00:07:07.944 14:21:14 -- setup/hugepages.sh@93 -- # local resv 00:07:07.944 14:21:14 -- setup/hugepages.sh@94 -- # local anon 00:07:07.944 14:21:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:07.944 14:21:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:07.944 14:21:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:07.944 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.944 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.944 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.944 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.944 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.944 14:21:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.944 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.944 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.944 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.944 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9015240 kB' 'MemAvailable: 10526144 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499528 kB' 'Inactive: 1344792 kB' 'Active(anon): 130388 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121252 kB' 'Mapped: 50884 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163744 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95908 kB' 'KernelStack: 6560 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val 
_ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 
00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.945 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.945 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:07.946 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:07.946 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.946 14:21:14 -- setup/hugepages.sh@97 -- # anon=0 00:07:07.946 14:21:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:07.946 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:07.946 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.946 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.946 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.946 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
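get_meminfo, whose trace continues here, defaults mem_f to /proc/meminfo and switches to /sys/devices/system/node/nodeN/meminfo when a node argument is supplied, then scans the file as "field: value" pairs until it reaches the requested field; the long runs of [[ ... ]] / continue lines in this log are that scan visiting every other field first. A compact standalone equivalent (a sketch, not a copy of setup/common.sh, which builds the field list with mapfile instead):

    #!/usr/bin/env bash
    # get_meminfo FIELD [NODE] - print FIELD's value from /proc/meminfo, or from
    # the per-node meminfo file when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
            mem_f=/sys/devices/system/node/node${node}/meminfo
        fi
        local var val _
        # Per-node files prefix each line with "Node N "; strip it, then scan.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Surp       # system-wide surplus pages
    get_meminfo HugePages_Surp 0     # surplus pages on node 0 only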
00:07:07.946 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.946 14:21:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.946 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.946 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9015240 kB' 'MemAvailable: 10526144 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499280 kB' 'Inactive: 1344792 kB' 'Active(anon): 130140 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121036 kB' 'Mapped: 50884 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163744 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95908 kB' 'KernelStack: 6528 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- 
setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.946 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 
00:07:07.946 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.946 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.947 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:07.947 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.947 14:21:14 -- setup/hugepages.sh@99 -- # surp=0 00:07:07.947 14:21:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:07.947 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:07.947 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.947 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.947 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.947 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.947 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.947 14:21:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.947 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.947 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9015240 kB' 'MemAvailable: 10526144 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499048 kB' 'Inactive: 1344792 kB' 'Active(anon): 129908 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120996 kB' 'Mapped: 
50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163736 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95900 kB' 'KernelStack: 6544 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.947 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.947 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 
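The loop traced above is setup/common.sh's get_meminfo scanning the captured /proc/meminfo snapshot one key at a time: every key that is not the one requested hits the "continue" branch, and the matching key ends the pass with an "echo <value>" / "return 0" pair (it just did so for HugePages_Surp, yielding surp=0, and is now repeating the scan for HugePages_Rsvd). A minimal sketch of that lookup, with the function name and the 0 fallback chosen here only for illustration:

    # sketch: print the value of one /proc/meminfo key, or 0 if it is absent
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }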
00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 
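Once this second pass returns resv=0, the consistency checks that follow reduce to simple accounting over the same snapshot: the hugepage total the kernel reports must equal the pages the test requested plus any surplus and reserved pages (512 == 512 + 0 + 0 in this run). A hedged, self-contained sketch of that check; the awk extraction merely stands in for the get_meminfo helper, and the variable names mirror the trace:

    # sketch: confirm the hugepage pool adds up to what the test requested
    nr_hugepages=512 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo "pool consistent: $total pages"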
00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.948 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.948 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.948 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:07.948 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:07.948 14:21:14 -- setup/hugepages.sh@100 -- # resv=0 00:07:07.948 14:21:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:07.948 nr_hugepages=512 00:07:07.948 resv_hugepages=0 00:07:07.948 14:21:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:07.948 surplus_hugepages=0 00:07:07.948 14:21:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:07.948 14:21:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:07.948 anon_hugepages=0 00:07:07.948 14:21:14 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:07.948 14:21:14 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:07.948 14:21:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:07.948 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:07.948 14:21:14 -- setup/common.sh@18 -- # local node= 00:07:07.948 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:07.948 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:07.948 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.949 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.949 14:21:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.949 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.949 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9015240 kB' 'MemAvailable: 10526144 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499040 kB' 'Inactive: 1344792 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120992 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163720 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95884 kB' 'KernelStack: 6544 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 
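This third scan fetches HugePages_Total for the whole system; once it returns 512, get_nodes enumerates /sys/devices/system/node/node[0-9]* and the per-node pass that follows reads node0's own meminfo before the script prints "node0=512 expecting 512" further down. A rough per-node walk under the same assumptions (single NUMA node, 2048 kB pages), with names chosen only for illustration:

    # sketch: report how many hugepages each NUMA node actually holds
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        pages=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${pages} expecting 512"
    done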
00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.949 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.949 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.950 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.950 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.950 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:07.950 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:07.950 14:21:14 -- setup/common.sh@32 -- # continue 00:07:07.950 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:07.950 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.207 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.207 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.208 14:21:14 -- setup/common.sh@33 -- # echo 512 00:07:08.208 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:08.208 14:21:14 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:08.208 14:21:14 -- setup/hugepages.sh@112 -- # get_nodes 00:07:08.208 14:21:14 -- setup/hugepages.sh@27 -- # local node 00:07:08.208 14:21:14 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:07:08.208 14:21:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:08.208 14:21:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:08.208 14:21:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:08.208 14:21:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:08.208 14:21:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:08.208 14:21:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:08.208 14:21:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:08.208 14:21:14 -- setup/common.sh@18 -- # local node=0 00:07:08.208 14:21:14 -- setup/common.sh@19 -- # local var val 00:07:08.208 14:21:14 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.208 14:21:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.208 14:21:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:08.208 14:21:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:08.208 14:21:14 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.208 14:21:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9015636 kB' 'MemUsed: 3223480 kB' 'SwapCached: 0 kB' 'Active: 499004 kB' 'Inactive: 1344792 kB' 'Active(anon): 129864 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724416 kB' 'Mapped: 50736 kB' 'AnonPages: 120956 kB' 'Shmem: 10484 kB' 'KernelStack: 6528 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 163716 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 
14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.208 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.208 14:21:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # continue 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.209 14:21:14 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.209 14:21:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.209 14:21:14 -- setup/common.sh@33 -- # echo 0 00:07:08.209 14:21:14 -- setup/common.sh@33 -- # return 0 00:07:08.209 14:21:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:08.209 14:21:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:08.209 14:21:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:08.209 14:21:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:08.209 14:21:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:08.209 node0=512 expecting 512 00:07:08.209 14:21:14 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:08.209 00:07:08.209 real 0m0.497s 00:07:08.209 user 0m0.256s 00:07:08.209 sys 0m0.272s 00:07:08.209 14:21:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.209 14:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:08.209 ************************************ 00:07:08.209 END TEST custom_alloc 00:07:08.209 ************************************ 00:07:08.209 14:21:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:08.209 14:21:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.209 14:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.209 14:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:08.209 ************************************ 00:07:08.209 START TEST no_shrink_alloc 00:07:08.209 ************************************ 00:07:08.209 14:21:14 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:07:08.209 14:21:14 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:08.209 14:21:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:08.209 14:21:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:08.209 14:21:14 -- 
setup/hugepages.sh@51 -- # shift 00:07:08.209 14:21:14 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:08.209 14:21:14 -- setup/hugepages.sh@52 -- # local node_ids 00:07:08.209 14:21:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:08.209 14:21:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:08.209 14:21:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:08.209 14:21:14 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:08.209 14:21:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:08.209 14:21:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:08.209 14:21:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:08.209 14:21:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:08.209 14:21:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:08.209 14:21:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:08.209 14:21:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:08.209 14:21:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:08.209 14:21:14 -- setup/hugepages.sh@73 -- # return 0 00:07:08.209 14:21:14 -- setup/hugepages.sh@198 -- # setup output 00:07:08.209 14:21:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:08.209 14:21:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:08.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:08.468 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:08.468 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:08.468 14:21:15 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:08.468 14:21:15 -- setup/hugepages.sh@89 -- # local node 00:07:08.468 14:21:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:08.468 14:21:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:08.468 14:21:15 -- setup/hugepages.sh@92 -- # local surp 00:07:08.468 14:21:15 -- setup/hugepages.sh@93 -- # local resv 00:07:08.468 14:21:15 -- setup/hugepages.sh@94 -- # local anon 00:07:08.468 14:21:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:08.468 14:21:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:08.468 14:21:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:08.468 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:08.468 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.468 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.468 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.468 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.468 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.468 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.468 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7969844 kB' 'MemAvailable: 9480748 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499192 kB' 'Inactive: 1344792 kB' 'Active(anon): 130052 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121144 kB' 'Mapped: 50844 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 
163752 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95916 kB' 'KernelStack: 6560 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 325804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 
-- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # 
IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.468 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.468 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- 
setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.469 14:21:15 -- setup/common.sh@33 -- # echo 0 00:07:08.469 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:08.469 14:21:15 -- setup/hugepages.sh@97 -- # anon=0 00:07:08.469 14:21:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:08.469 14:21:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:08.469 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:08.469 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.469 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.469 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.469 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.469 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.469 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.469 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7970104 kB' 'MemAvailable: 9481008 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499128 kB' 'Inactive: 1344792 kB' 'Active(anon): 129988 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121216 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163772 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95936 kB' 'KernelStack: 6592 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 325048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.469 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.469 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 
-- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- 
setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 
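
For anyone reading the xtrace above: each long run of "[[ Key == ... ]] / continue" is setup/common.sh's get_meminfo walking a /proc/meminfo snapshot with IFS=': ' and read -r var val _, skipping every field that is not the one requested and echoing the value once the requested key is reached. A minimal stand-alone reconstruction of that loop is sketched below; the function name is illustrative and the snapshot handling is simplified compared to the real script, which first captures the file with mapfile.

  #!/usr/bin/env bash
  # Illustrative reconstruction of the lookup loop traced above -- not the upstream
  # setup/common.sh, just the same "scan key: value pairs until the key matches" idea.
  get_meminfo_like() {
      local get=$1        # e.g. AnonHugePages or HugePages_Surp
      local var val _
      while IFS=': ' read -r var val _; do
          # Non-matching keys are skipped; this is what produces the long
          # "[[ Key == ... ]] / continue" runs in the xtrace output.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_like HugePages_Surp   # prints 0 in the run traced here
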
00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.470 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.470 14:21:15 -- setup/common.sh@33 -- # echo 0 00:07:08.470 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:08.470 14:21:15 -- setup/hugepages.sh@99 -- # surp=0 00:07:08.470 14:21:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:08.470 14:21:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:08.470 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:08.470 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.470 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.470 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.470 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.470 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.470 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.470 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.470 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.471 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7969940 kB' 'MemAvailable: 9480844 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499108 kB' 'Inactive: 1344792 kB' 'Active(anon): 129968 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163756 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95920 kB' 'KernelStack: 6544 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 325048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 
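
As a side note on the snapshot printed in the trace, the hugepage fields are internally consistent: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so Hugetlb should be 1024 * 2048 kB = 2097152 kB, which is exactly the value reported. The quick check below restates that arithmetic; it is only valid as written on a system using a single hugepage size, and the awk extraction is my shorthand rather than anything the SPDK scripts do.

  # Illustrative consistency check (single hugepage size assumed):
  # Hugetlb should equal HugePages_Total * Hugepagesize.
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
  echo "expected $((total * size_kb)) kB, /proc/meminfo reports ${hugetlb} kB"
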
00:07:08.471 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.471 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.471 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.729 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.729 14:21:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
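
The setup/hugepages.sh@97-@100 lines interleaved with these scans show how the counters are actually consumed: each get_meminfo call runs inside a command substitution and its echoed value lands in a shell variable (anon=0, surp=0, and, once the scan below finishes, resv=0). A hedged sketch of that caller pattern follows; it pulls the same three fields straight out of /proc/meminfo with awk instead of the script's own helper.

  # Illustrative caller pattern (not the upstream setup/hugepages.sh): capture each
  # hugepage counter via command substitution, as the @97-@100 trace lines show.
  anon=$(awk '/^AnonHugePages:/  {print $2}' /proc/meminfo)   # transparent hugepages, kB
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # surplus persistent pages
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # reserved, not yet faulted
  echo "anon=${anon} surp=${surp} resv=${resv}"               # anon=0 surp=0 resv=0 here
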
00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # 
read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:08.730 14:21:15 -- setup/common.sh@33 -- # echo 0 00:07:08.730 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:08.730 14:21:15 -- setup/hugepages.sh@100 -- # resv=0 00:07:08.730 nr_hugepages=1024 00:07:08.730 14:21:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:08.730 resv_hugepages=0 00:07:08.730 14:21:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:08.730 surplus_hugepages=0 00:07:08.730 14:21:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:08.730 anon_hugepages=0 00:07:08.730 14:21:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:08.730 14:21:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:08.730 14:21:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:08.730 14:21:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:08.730 14:21:15 -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:07:08.730 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:08.730 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.730 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.730 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.730 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.730 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.730 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.730 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7969940 kB' 'MemAvailable: 9480844 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 499088 kB' 'Inactive: 1344792 kB' 'Active(anon): 129948 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67836 kB' 'Slab: 163748 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95912 kB' 'KernelStack: 6560 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 325048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.730 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.730 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 
-- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- 
# continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 
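
The assertions traced at setup/hugepages.sh@107-@110 just before this scan tie the pieces together: the expected page count (1024 in this run) must equal nr_hugepages plus the surplus and reserved counters gathered above, and the HugePages_Total field being re-read here must report the same number. The sketch below restates that accounting; the variable names and the literal 1024 are taken from this run, and the whole block is an illustration rather than the upstream check.

  # Illustrative restatement of the accounting asserted at setup/hugepages.sh@107-@110.
  expected=1024                                    # page count requested in this run
  nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

  if (( expected == nr_hugepages + surp + resv )) && (( total == expected )); then
      echo "hugepage accounting consistent: ${total} pages"
  else
      echo "hugepage accounting mismatch: total=${total} nr=${nr_hugepages} surp=${surp} resv=${resv}" >&2
  fi
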
00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.731 14:21:15 -- setup/common.sh@33 -- # echo 1024 00:07:08.731 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:08.731 14:21:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:08.731 14:21:15 -- setup/hugepages.sh@112 -- # get_nodes 00:07:08.731 14:21:15 -- setup/hugepages.sh@27 -- # local node 00:07:08.731 14:21:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:08.731 14:21:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:08.731 14:21:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:08.731 14:21:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:08.731 14:21:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:08.731 14:21:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:08.731 14:21:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:08.731 14:21:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:08.731 14:21:15 -- setup/common.sh@18 -- # local node=0 00:07:08.731 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.731 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.731 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.731 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:08.731 14:21:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:08.731 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.731 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7969940 kB' 'MemUsed: 4269176 kB' 'SwapCached: 0 kB' 'Active: 499024 kB' 'Inactive: 1344792 kB' 'Active(anon): 129884 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724416 kB' 'Mapped: 50736 kB' 'AnonPages: 121024 kB' 'Shmem: 10484 kB' 'KernelStack: 6544 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67836 kB' 'Slab: 163748 kB' 'SReclaimable: 67836 kB' 'SUnreclaim: 95912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.731 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.731 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # 
IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- 
setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.732 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 
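
The scan running here is the same key-matching loop again, but pointed at the per-node file: because get_meminfo was called as get_meminfo HugePages_Surp 0, the trace shows mem_f switching to /sys/devices/system/node/node0/meminfo and each captured line having its "Node 0 " prefix stripped (the "${mem[@]#Node +([0-9]) }" expansion) so the usual "key: value" parsing still applies. Below is a hedged, simplified sketch of that node-aware source selection; the fallback to /proc/meminfo and the final awk extraction are my shorthand.

  # Illustrative node-aware lookup (not the upstream script): read the per-node meminfo
  # when a node is given, strip the "Node N " prefix, then match keys as before.
  node=0
  mem_f=/sys/devices/system/node/node${node}/meminfo
  [[ -e $mem_f ]] || mem_f=/proc/meminfo           # simplified fallback for non-NUMA layouts

  shopt -s extglob
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")                 # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
  printf '%s\n' "${mem[@]}" | awk '/^HugePages_Surp:/ {print $2}'   # prints 0 in this run
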
00:07:08.732 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.732 14:21:15 -- setup/common.sh@33 -- # echo 0 00:07:08.732 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:08.732 14:21:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:08.732 14:21:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:08.732 14:21:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:08.732 14:21:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:08.732 node0=1024 expecting 1024 00:07:08.732 14:21:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:08.732 14:21:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:08.732 14:21:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:08.732 14:21:15 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:08.732 14:21:15 -- setup/hugepages.sh@202 -- # setup output 00:07:08.732 14:21:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:08.732 14:21:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:08.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:08.990 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:08.990 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:08.990 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:08.990 14:21:15 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:08.990 14:21:15 -- setup/hugepages.sh@89 -- # local node 00:07:08.990 14:21:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:08.990 14:21:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:08.990 14:21:15 -- setup/hugepages.sh@92 -- # local surp 00:07:08.990 14:21:15 -- setup/hugepages.sh@93 -- # local resv 00:07:08.990 14:21:15 -- setup/hugepages.sh@94 -- # local anon 00:07:08.990 14:21:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:08.990 14:21:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:08.990 14:21:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:08.990 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:08.990 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.990 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.990 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.990 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.990 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.990 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.990 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7973320 kB' 'MemAvailable: 9484208 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 496680 kB' 'Inactive: 1344792 kB' 'Active(anon): 127540 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118668 kB' 'Mapped: 50180 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163448 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95644 kB' 'KernelStack: 6472 kB' 'PageTables: 4184 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- 
# continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.990 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.990 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 
14:21:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:08.991 14:21:15 -- setup/common.sh@33 -- # echo 0 00:07:08.991 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:08.991 14:21:15 -- setup/hugepages.sh@97 -- # anon=0 00:07:08.991 14:21:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:08.991 14:21:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:08.991 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:08.991 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.991 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.991 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.991 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.991 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.991 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.991 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7973320 kB' 'MemAvailable: 9484208 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 496336 kB' 'Inactive: 1344792 kB' 'Active(anon): 127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118276 kB' 'Mapped: 49924 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163440 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95636 kB' 'KernelStack: 6432 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:08.991 
14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.991 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.991 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 
14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 
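The stretch of trace around this point is verify_nr_hugepages from setup/hugepages.sh gathering AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total one get_meminfo call at a time (each scan repeating the field-by-field walk above), then checking them against the 1024 pages this run expects. A rough sketch of that consistency check, assuming the simplified get_meminfo sketched earlier; the variable names are illustrative, not the script's own:

  verify_nr_hugepages() {
      local nr_hugepages=1024             # expected by this test run
      local anon surp resv total
      anon=$(get_meminfo AnonHugePages)   # 0 in the trace
      surp=$(get_meminfo HugePages_Surp)  # 0 in the trace
      resv=$(get_meminfo HugePages_Rsvd)  # 0 in the trace
      total=$(get_meminfo HugePages_Total)
      echo "anon_hugepages=$anon"
      # mirrors the log's "(( 1024 == nr_hugepages + surp + resv ))" check
      (( total == nr_hugepages + surp + resv )) || return 1
      echo "nr_hugepages=$total"
  }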
00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.992 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.992 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.993 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.993 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.993 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.993 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.993 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.993 14:21:15 -- setup/common.sh@32 -- # continue 00:07:08.993 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.993 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.993 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.993 14:21:15 -- setup/common.sh@32 -- # 
continue 00:07:08.993 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.993 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.993 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.993 14:21:15 -- setup/common.sh@33 -- # echo 0 00:07:08.993 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:08.993 14:21:15 -- setup/hugepages.sh@99 -- # surp=0 00:07:08.993 14:21:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:08.993 14:21:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:08.993 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:08.993 14:21:15 -- setup/common.sh@19 -- # local var val 00:07:08.993 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.993 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.993 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.993 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.993 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.993 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7973320 kB' 'MemAvailable: 9484208 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 496276 kB' 'Inactive: 1344792 kB' 'Active(anon): 127136 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118244 kB' 'Mapped: 49924 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163440 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95636 kB' 'KernelStack: 6448 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # 
continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.252 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.252 14:21:15 -- setup/common.sh@31 
-- # read -r var val _ 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.252 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- 
setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:09.253 14:21:15 -- setup/common.sh@33 -- # echo 0 00:07:09.253 14:21:15 -- setup/common.sh@33 -- # return 0 00:07:09.253 14:21:15 -- setup/hugepages.sh@100 -- # resv=0 00:07:09.253 nr_hugepages=1024 00:07:09.253 14:21:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:09.253 resv_hugepages=0 00:07:09.253 14:21:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:09.253 surplus_hugepages=0 00:07:09.253 14:21:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:09.253 anon_hugepages=0 00:07:09.253 14:21:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:09.253 14:21:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:09.253 14:21:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:09.253 14:21:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:09.253 14:21:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:09.253 14:21:15 -- setup/common.sh@18 -- # local node= 00:07:09.253 14:21:15 -- 
setup/common.sh@19 -- # local var val 00:07:09.253 14:21:15 -- setup/common.sh@20 -- # local mem_f mem 00:07:09.253 14:21:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.253 14:21:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:09.253 14:21:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:09.253 14:21:15 -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.253 14:21:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7973320 kB' 'MemAvailable: 9484208 kB' 'Buffers: 3704 kB' 'Cached: 1720712 kB' 'SwapCached: 0 kB' 'Active: 495984 kB' 'Inactive: 1344792 kB' 'Active(anon): 126844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117980 kB' 'Mapped: 49924 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163440 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95636 kB' 'KernelStack: 6480 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 
14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.253 14:21:15 -- setup/common.sh@32 -- # continue 00:07:09.253 14:21:15 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 
14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- 
setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.254 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.254 14:21:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:09.254 14:21:16 -- setup/common.sh@33 -- # echo 1024 00:07:09.254 14:21:16 -- setup/common.sh@33 -- # return 0 00:07:09.254 14:21:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:09.254 14:21:16 -- setup/hugepages.sh@112 -- # get_nodes 00:07:09.254 14:21:16 -- setup/hugepages.sh@27 -- # local node 00:07:09.254 14:21:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:09.254 14:21:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:09.254 14:21:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:09.254 14:21:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:09.255 14:21:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:09.255 14:21:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:09.255 14:21:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:09.255 14:21:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:09.255 14:21:16 -- setup/common.sh@18 -- # local node=0 00:07:09.255 14:21:16 -- setup/common.sh@19 -- # local var val 00:07:09.255 14:21:16 -- setup/common.sh@20 -- # local mem_f mem 00:07:09.255 14:21:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.255 14:21:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:09.255 14:21:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:09.255 14:21:16 -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.255 14:21:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7973320 kB' 'MemUsed: 4265796 kB' 'SwapCached: 0 kB' 'Active: 496004 kB' 'Inactive: 1344792 kB' 'Active(anon): 126864 kB' 'Inactive(anon): 0 kB' 'Active(file): 369140 kB' 'Inactive(file): 1344792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724420 kB' 'Mapped: 49924 kB' 'AnonPages: 118016 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 163424 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.255 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.255 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.256 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.256 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.256 14:21:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.256 14:21:16 -- setup/common.sh@32 -- # continue 00:07:09.256 14:21:16 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.256 14:21:16 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.256 14:21:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.256 
14:21:16 -- setup/common.sh@33 -- # echo 0 00:07:09.256 14:21:16 -- setup/common.sh@33 -- # return 0 00:07:09.256 14:21:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:09.256 14:21:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:09.256 14:21:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:09.256 14:21:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:09.256 node0=1024 expecting 1024 00:07:09.256 14:21:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:09.256 14:21:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:09.256 00:07:09.256 real 0m1.075s 00:07:09.256 user 0m0.545s 00:07:09.256 sys 0m0.594s 00:07:09.256 14:21:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.256 14:21:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.256 ************************************ 00:07:09.256 END TEST no_shrink_alloc 00:07:09.256 ************************************ 00:07:09.256 14:21:16 -- setup/hugepages.sh@217 -- # clear_hp 00:07:09.256 14:21:16 -- setup/hugepages.sh@37 -- # local node hp 00:07:09.256 14:21:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:09.256 14:21:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.256 14:21:16 -- setup/hugepages.sh@41 -- # echo 0 00:07:09.256 14:21:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.256 14:21:16 -- setup/hugepages.sh@41 -- # echo 0 00:07:09.256 14:21:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:09.256 14:21:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:09.256 00:07:09.256 real 0m4.929s 00:07:09.256 user 0m2.416s 00:07:09.256 sys 0m2.488s 00:07:09.256 14:21:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.256 ************************************ 00:07:09.256 END TEST hugepages 00:07:09.256 ************************************ 00:07:09.256 14:21:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.256 14:21:16 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:09.256 14:21:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.256 14:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.256 14:21:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.256 ************************************ 00:07:09.256 START TEST driver 00:07:09.256 ************************************ 00:07:09.256 14:21:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:09.514 * Looking for test storage... 
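The hugepage accounting that the trace above walks through field by field comes down to a small amount of shell: read the HugePages_* counters either from /proc/meminfo or from the per-node copy under /sys/devices/system/node, and zero every nr_hugepages counter once the test is finished. The snippet below is an illustrative sketch by the editor, not the SPDK setup/common.sh helper (the function names are made up), and writing the counters requires root.

# Editor's sketch of the pattern traced above -- not the SPDK helper itself.
# Read one field from a meminfo file (system-wide, or the per-NUMA-node copy).
get_meminfo_field() {
    local field=$1 node=${2-}
    local src=/proc/meminfo
    # Per-node values live under sysfs when a node number is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        src=/sys/devices/system/node/node$node/meminfo
    # Per-node lines carry a "Node <n>" prefix; strip it, then match the key.
    awk -v key="$field" '{ sub(/^Node [0-9]+ +/, "") }
        $1 == key":" { print $2; exit }' "$src"
}

# Zero every per-node hugepage pool, as clear_hp does at the end of the test (root only).
clear_hugepages() {
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        [[ -e $hp ]] && echo 0 > "$hp"
    done
}

# Example: compare the global pool with node 0, mirroring the 1024-page check above.
echo "total: $(get_meminfo_field HugePages_Total)  node0 surplus: $(get_meminfo_field HugePages_Surp 0)"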
00:07:09.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:09.514 14:21:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.514 14:21:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.514 14:21:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.514 14:21:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.514 14:21:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.514 14:21:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.514 14:21:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.514 14:21:16 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.514 14:21:16 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.514 14:21:16 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.514 14:21:16 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.514 14:21:16 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.514 14:21:16 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.514 14:21:16 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.514 14:21:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.514 14:21:16 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.514 14:21:16 -- scripts/common.sh@344 -- # : 1 00:07:09.514 14:21:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.514 14:21:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.514 14:21:16 -- scripts/common.sh@364 -- # decimal 1 00:07:09.514 14:21:16 -- scripts/common.sh@352 -- # local d=1 00:07:09.514 14:21:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.514 14:21:16 -- scripts/common.sh@354 -- # echo 1 00:07:09.514 14:21:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.514 14:21:16 -- scripts/common.sh@365 -- # decimal 2 00:07:09.514 14:21:16 -- scripts/common.sh@352 -- # local d=2 00:07:09.514 14:21:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.514 14:21:16 -- scripts/common.sh@354 -- # echo 2 00:07:09.514 14:21:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.514 14:21:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.514 14:21:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.514 14:21:16 -- scripts/common.sh@367 -- # return 0 00:07:09.515 14:21:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.515 14:21:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.515 --rc genhtml_branch_coverage=1 00:07:09.515 --rc genhtml_function_coverage=1 00:07:09.515 --rc genhtml_legend=1 00:07:09.515 --rc geninfo_all_blocks=1 00:07:09.515 --rc geninfo_unexecuted_blocks=1 00:07:09.515 00:07:09.515 ' 00:07:09.515 14:21:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.515 --rc genhtml_branch_coverage=1 00:07:09.515 --rc genhtml_function_coverage=1 00:07:09.515 --rc genhtml_legend=1 00:07:09.515 --rc geninfo_all_blocks=1 00:07:09.515 --rc geninfo_unexecuted_blocks=1 00:07:09.515 00:07:09.515 ' 00:07:09.515 14:21:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:09.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.515 --rc genhtml_branch_coverage=1 00:07:09.515 --rc genhtml_function_coverage=1 00:07:09.515 --rc genhtml_legend=1 00:07:09.515 --rc geninfo_all_blocks=1 00:07:09.515 --rc geninfo_unexecuted_blocks=1 00:07:09.515 00:07:09.515 ' 00:07:09.515 14:21:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.515 --rc genhtml_branch_coverage=1 00:07:09.515 --rc genhtml_function_coverage=1 00:07:09.515 --rc genhtml_legend=1 00:07:09.515 --rc geninfo_all_blocks=1 00:07:09.515 --rc geninfo_unexecuted_blocks=1 00:07:09.515 00:07:09.515 ' 00:07:09.515 14:21:16 -- setup/driver.sh@68 -- # setup reset 00:07:09.515 14:21:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:09.515 14:21:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:10.080 14:21:16 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:10.080 14:21:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.080 14:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.080 14:21:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.080 ************************************ 00:07:10.080 START TEST guess_driver 00:07:10.080 ************************************ 00:07:10.080 14:21:16 -- common/autotest_common.sh@1114 -- # guess_driver 00:07:10.080 14:21:16 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:10.080 14:21:16 -- setup/driver.sh@47 -- # local fail=0 00:07:10.080 14:21:16 -- setup/driver.sh@49 -- # pick_driver 00:07:10.080 14:21:16 -- setup/driver.sh@36 -- # vfio 00:07:10.080 14:21:16 -- setup/driver.sh@21 -- # local iommu_grups 00:07:10.080 14:21:16 -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:10.081 14:21:16 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:10.081 14:21:16 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:10.081 14:21:16 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:07:10.081 14:21:16 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:07:10.081 14:21:16 -- setup/driver.sh@32 -- # return 1 00:07:10.081 14:21:16 -- setup/driver.sh@38 -- # uio 00:07:10.081 14:21:16 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:07:10.081 14:21:16 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:07:10.081 14:21:16 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:07:10.081 14:21:16 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:07:10.081 14:21:17 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:07:10.081 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:07:10.081 14:21:17 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:07:10.081 Looking for driver=uio_pci_generic 00:07:10.081 14:21:17 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:07:10.081 14:21:17 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:10.081 14:21:17 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:07:10.081 14:21:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:10.081 14:21:17 -- setup/driver.sh@45 -- # setup output config 00:07:10.081 14:21:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:10.081 14:21:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:11.013 14:21:17 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:07:11.013 14:21:17 -- setup/driver.sh@58 -- # continue 00:07:11.013 14:21:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:11.013 14:21:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:11.013 14:21:17 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:07:11.013 14:21:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:11.013 14:21:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:11.013 14:21:17 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:11.013 14:21:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:11.013 14:21:17 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:11.013 14:21:17 -- setup/driver.sh@65 -- # setup reset 00:07:11.013 14:21:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:11.013 14:21:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:11.578 00:07:11.578 real 0m1.510s 00:07:11.578 user 0m0.599s 00:07:11.578 sys 0m0.938s 00:07:11.578 14:21:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.578 14:21:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.578 ************************************ 00:07:11.578 END TEST guess_driver 00:07:11.578 ************************************ 00:07:11.578 00:07:11.578 real 0m2.383s 00:07:11.578 user 0m0.953s 00:07:11.578 sys 0m1.530s 00:07:11.578 14:21:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.578 14:21:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.578 ************************************ 00:07:11.578 END TEST driver 00:07:11.578 ************************************ 00:07:11.836 14:21:18 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:11.836 14:21:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:11.836 14:21:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.836 14:21:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.836 ************************************ 00:07:11.836 START TEST devices 00:07:11.836 ************************************ 00:07:11.836 14:21:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:11.836 * Looking for test storage... 00:07:11.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:11.836 14:21:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:11.836 14:21:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:11.836 14:21:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:11.836 14:21:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:11.836 14:21:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:11.836 14:21:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:11.836 14:21:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:11.836 14:21:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:11.836 14:21:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:11.836 14:21:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.836 14:21:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:12.094 14:21:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:12.094 14:21:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:12.094 14:21:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:12.094 14:21:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:12.094 14:21:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:12.094 14:21:18 -- scripts/common.sh@344 -- # : 1 00:07:12.094 14:21:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:12.094 14:21:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.094 14:21:18 -- scripts/common.sh@364 -- # decimal 1 00:07:12.094 14:21:18 -- scripts/common.sh@352 -- # local d=1 00:07:12.094 14:21:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.094 14:21:18 -- scripts/common.sh@354 -- # echo 1 00:07:12.094 14:21:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:12.094 14:21:18 -- scripts/common.sh@365 -- # decimal 2 00:07:12.094 14:21:18 -- scripts/common.sh@352 -- # local d=2 00:07:12.094 14:21:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.094 14:21:18 -- scripts/common.sh@354 -- # echo 2 00:07:12.094 14:21:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:12.094 14:21:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:12.094 14:21:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:12.094 14:21:18 -- scripts/common.sh@367 -- # return 0 00:07:12.094 14:21:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.094 14:21:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:12.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.094 --rc genhtml_branch_coverage=1 00:07:12.094 --rc genhtml_function_coverage=1 00:07:12.094 --rc genhtml_legend=1 00:07:12.094 --rc geninfo_all_blocks=1 00:07:12.094 --rc geninfo_unexecuted_blocks=1 00:07:12.094 00:07:12.094 ' 00:07:12.094 14:21:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:12.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.094 --rc genhtml_branch_coverage=1 00:07:12.094 --rc genhtml_function_coverage=1 00:07:12.094 --rc genhtml_legend=1 00:07:12.094 --rc geninfo_all_blocks=1 00:07:12.094 --rc geninfo_unexecuted_blocks=1 00:07:12.094 00:07:12.094 ' 00:07:12.094 14:21:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:12.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.094 --rc genhtml_branch_coverage=1 00:07:12.094 --rc genhtml_function_coverage=1 00:07:12.094 --rc genhtml_legend=1 00:07:12.094 --rc geninfo_all_blocks=1 00:07:12.094 --rc geninfo_unexecuted_blocks=1 00:07:12.094 00:07:12.094 ' 00:07:12.094 14:21:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:12.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.094 --rc genhtml_branch_coverage=1 00:07:12.094 --rc genhtml_function_coverage=1 00:07:12.094 --rc genhtml_legend=1 00:07:12.094 --rc geninfo_all_blocks=1 00:07:12.094 --rc geninfo_unexecuted_blocks=1 00:07:12.094 00:07:12.094 ' 00:07:12.094 14:21:18 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:12.094 14:21:18 -- setup/devices.sh@192 -- # setup reset 00:07:12.094 14:21:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:12.094 14:21:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:13.029 14:21:19 -- setup/devices.sh@194 -- # get_zoned_devs 00:07:13.029 14:21:19 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:07:13.029 14:21:19 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:07:13.029 14:21:19 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:07:13.029 14:21:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:13.029 14:21:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:07:13.029 14:21:19 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:07:13.029 14:21:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:13.029 14:21:19 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:07:13.029 14:21:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:13.029 14:21:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:07:13.029 14:21:19 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:07:13.029 14:21:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:13.029 14:21:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:13.029 14:21:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:13.029 14:21:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:07:13.029 14:21:19 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:07:13.029 14:21:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:13.029 14:21:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:13.029 14:21:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:13.029 14:21:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:07:13.029 14:21:19 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:07:13.029 14:21:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:13.029 14:21:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:13.029 14:21:19 -- setup/devices.sh@196 -- # blocks=() 00:07:13.029 14:21:19 -- setup/devices.sh@196 -- # declare -a blocks 00:07:13.029 14:21:19 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:13.029 14:21:19 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:13.029 14:21:19 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:13.029 14:21:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:13.029 14:21:19 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:13.029 14:21:19 -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:13.029 14:21:19 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:07:13.029 14:21:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:07:13.029 14:21:19 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:13.029 14:21:19 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:07:13.029 14:21:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:07:13.029 No valid GPT data, bailing 00:07:13.029 14:21:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:13.029 14:21:19 -- scripts/common.sh@393 -- # pt= 00:07:13.029 14:21:19 -- scripts/common.sh@394 -- # return 1 00:07:13.029 14:21:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:13.029 14:21:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:13.029 14:21:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:13.029 14:21:19 -- setup/common.sh@80 -- # echo 5368709120 00:07:13.029 14:21:19 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:07:13.029 14:21:19 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:13.029 14:21:19 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:07:13.029 14:21:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:13.029 14:21:19 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:07:13.029 14:21:19 -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:13.029 14:21:19 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:07:13.029 14:21:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:07:13.029 14:21:19 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
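The is_block_zoned checks above read each namespace's queue/zoned attribute so that zoned devices can be filtered out before the mount tests run. A self-contained sketch of that filter follows (editor's illustration; the helper name is made up):

# Editor's sketch of the zoned-namespace filter seen in the trace above.
# A device counts as zoned when its queue/zoned attribute reports anything but "none".
is_dev_zoned() {
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned ]] || return 1    # attribute missing: treat as not zoned
    [[ $(< "/sys/block/$dev/queue/zoned") != none ]]
}

for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if is_dev_zoned "$dev"; then
        echo "$dev is zoned, skipping"
    else
        echo "$dev is a regular namespace"
    fi
done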
00:07:13.029 14:21:19 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:07:13.029 14:21:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:07:13.029 No valid GPT data, bailing 00:07:13.029 14:21:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:13.029 14:21:19 -- scripts/common.sh@393 -- # pt= 00:07:13.029 14:21:19 -- scripts/common.sh@394 -- # return 1 00:07:13.029 14:21:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:07:13.029 14:21:19 -- setup/common.sh@76 -- # local dev=nvme1n1 00:07:13.029 14:21:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:07:13.029 14:21:19 -- setup/common.sh@80 -- # echo 4294967296 00:07:13.029 14:21:19 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:13.029 14:21:19 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:13.029 14:21:19 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:07:13.029 14:21:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:13.029 14:21:19 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:07:13.029 14:21:19 -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:13.029 14:21:19 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:07:13.029 14:21:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:07:13.029 14:21:19 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:07:13.029 14:21:19 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:07:13.029 14:21:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:07:13.029 No valid GPT data, bailing 00:07:13.029 14:21:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:13.287 14:21:19 -- scripts/common.sh@393 -- # pt= 00:07:13.287 14:21:19 -- scripts/common.sh@394 -- # return 1 00:07:13.287 14:21:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:07:13.287 14:21:19 -- setup/common.sh@76 -- # local dev=nvme1n2 00:07:13.287 14:21:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:07:13.287 14:21:20 -- setup/common.sh@80 -- # echo 4294967296 00:07:13.287 14:21:20 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:13.287 14:21:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:13.287 14:21:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:07:13.287 14:21:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:13.287 14:21:20 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:07:13.287 14:21:20 -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:13.287 14:21:20 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:07:13.287 14:21:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:07:13.287 14:21:20 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:07:13.287 14:21:20 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:07:13.287 14:21:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:07:13.287 No valid GPT data, bailing 00:07:13.287 14:21:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:13.287 14:21:20 -- scripts/common.sh@393 -- # pt= 00:07:13.288 14:21:20 -- scripts/common.sh@394 -- # return 1 00:07:13.288 14:21:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:07:13.288 14:21:20 -- setup/common.sh@76 -- # local dev=nvme1n3 00:07:13.288 14:21:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:07:13.288 14:21:20 -- setup/common.sh@80 -- # echo 4294967296 
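Each remaining namespace is then gated the way the trace shows for nvme0n1 and nvme1n1: probe for an existing partition table (the test tries scripts/spdk-gpt.py and falls back to blkid) and compare the capacity against min_disk_size, 3221225472 bytes in this run. The sketch below keeps only the blkid path and is the editor's illustration, not the block_in_use helper itself:

# Editor's sketch of the per-device gate traced above (blkid path only).
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

device_usable() {
    local dev=$1 pt
    # Reject devices that already carry a partition table.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
    [[ -n $pt ]] && { echo "/dev/$dev already has a $pt table"; return 1; }
    # /sys/block/<dev>/size is a 512-byte sector count.
    local bytes=$(( $(< "/sys/block/$dev/size") * 512 ))
    (( bytes >= min_disk_size ))
}

device_usable nvme0n1 && echo "nvme0n1 is free and large enough"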
00:07:13.288 14:21:20 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:13.288 14:21:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:13.288 14:21:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:07:13.288 14:21:20 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:07:13.288 14:21:20 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:13.288 14:21:20 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:13.288 14:21:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.288 14:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.288 14:21:20 -- common/autotest_common.sh@10 -- # set +x 00:07:13.288 ************************************ 00:07:13.288 START TEST nvme_mount 00:07:13.288 ************************************ 00:07:13.288 14:21:20 -- common/autotest_common.sh@1114 -- # nvme_mount 00:07:13.288 14:21:20 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:13.288 14:21:20 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:13.288 14:21:20 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:13.288 14:21:20 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:13.288 14:21:20 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:13.288 14:21:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:13.288 14:21:20 -- setup/common.sh@40 -- # local part_no=1 00:07:13.288 14:21:20 -- setup/common.sh@41 -- # local size=1073741824 00:07:13.288 14:21:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:13.288 14:21:20 -- setup/common.sh@44 -- # parts=() 00:07:13.288 14:21:20 -- setup/common.sh@44 -- # local parts 00:07:13.288 14:21:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:13.288 14:21:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:13.288 14:21:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:13.288 14:21:20 -- setup/common.sh@46 -- # (( part++ )) 00:07:13.288 14:21:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:13.288 14:21:20 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:13.288 14:21:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:13.288 14:21:20 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:14.220 Creating new GPT entries in memory. 00:07:14.220 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:14.220 other utilities. 00:07:14.220 14:21:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:14.220 14:21:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:14.220 14:21:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:14.220 14:21:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:14.220 14:21:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:15.590 Creating new GPT entries in memory. 00:07:15.590 The operation has completed successfully. 
00:07:15.591 14:21:22 -- setup/common.sh@57 -- # (( part++ )) 00:07:15.591 14:21:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:15.591 14:21:22 -- setup/common.sh@62 -- # wait 53910 00:07:15.591 14:21:22 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:15.591 14:21:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:07:15.591 14:21:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:15.591 14:21:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:15.591 14:21:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:15.591 14:21:22 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:15.591 14:21:22 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:15.591 14:21:22 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:15.591 14:21:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:15.591 14:21:22 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:15.591 14:21:22 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:15.591 14:21:22 -- setup/devices.sh@53 -- # local found=0 00:07:15.591 14:21:22 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:15.591 14:21:22 -- setup/devices.sh@56 -- # : 00:07:15.591 14:21:22 -- setup/devices.sh@59 -- # local pci status 00:07:15.591 14:21:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.591 14:21:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:15.591 14:21:22 -- setup/devices.sh@47 -- # setup output config 00:07:15.591 14:21:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:15.591 14:21:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:15.591 14:21:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:15.591 14:21:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:15.591 14:21:22 -- setup/devices.sh@63 -- # found=1 00:07:15.591 14:21:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.591 14:21:22 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:15.591 14:21:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.154 14:21:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:16.154 14:21:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.154 14:21:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:16.154 14:21:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.154 14:21:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:16.154 14:21:22 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:16.154 14:21:22 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:16.154 14:21:22 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:16.154 14:21:22 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:16.154 14:21:22 -- setup/devices.sh@110 -- # cleanup_nvme 00:07:16.154 14:21:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:16.154 14:21:22 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:16.154 14:21:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:16.154 14:21:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:16.154 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:16.154 14:21:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:16.154 14:21:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:16.414 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:16.414 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:16.414 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:16.414 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:16.414 14:21:23 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:07:16.414 14:21:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:07:16.414 14:21:23 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:16.414 14:21:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:16.414 14:21:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:16.414 14:21:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:16.414 14:21:23 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:16.414 14:21:23 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:16.414 14:21:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:16.414 14:21:23 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:16.414 14:21:23 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:16.414 14:21:23 -- setup/devices.sh@53 -- # local found=0 00:07:16.414 14:21:23 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:16.414 14:21:23 -- setup/devices.sh@56 -- # : 00:07:16.414 14:21:23 -- setup/devices.sh@59 -- # local pci status 00:07:16.414 14:21:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.414 14:21:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:16.414 14:21:23 -- setup/devices.sh@47 -- # setup output config 00:07:16.414 14:21:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:16.414 14:21:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:16.705 14:21:23 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:16.705 14:21:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:16.705 14:21:23 -- setup/devices.sh@63 -- # found=1 00:07:16.705 14:21:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.705 14:21:23 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:16.705 
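Once the partition exists, the nvme_mount test repeatedly formats it, mounts it under test/setup/nvme_mount, drops a marker file, and then tears everything down with umount and wipefs, as the cleanup_nvme lines above show. A condensed sketch of one such cycle (editor's illustration using the same flags that appear in the trace; run as root):

# Editor's sketch of the mount/cleanup cycle the nvme_mount test repeats above.
part=/dev/nvme0n1p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

mkdir -p "$mnt"
mkfs.ext4 -qF "$part"                   # quiet, force -- same flags as in the trace
mount "$part" "$mnt"
touch "$mnt/test_nvme"                  # the dummy file the verify step looks for

# Tear-down mirrors cleanup_nvme: unmount if mounted, then wipe the signatures.
mountpoint -q "$mnt" && umount "$mnt"
wipefs --all "$part"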
14:21:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.964 14:21:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:16.964 14:21:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.964 14:21:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:16.964 14:21:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.222 14:21:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:17.222 14:21:24 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:17.222 14:21:24 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:17.222 14:21:24 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:17.222 14:21:24 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:17.222 14:21:24 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:17.222 14:21:24 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:07:17.222 14:21:24 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:17.222 14:21:24 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:17.222 14:21:24 -- setup/devices.sh@50 -- # local mount_point= 00:07:17.222 14:21:24 -- setup/devices.sh@51 -- # local test_file= 00:07:17.222 14:21:24 -- setup/devices.sh@53 -- # local found=0 00:07:17.222 14:21:24 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:17.222 14:21:24 -- setup/devices.sh@59 -- # local pci status 00:07:17.222 14:21:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.222 14:21:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:17.222 14:21:24 -- setup/devices.sh@47 -- # setup output config 00:07:17.222 14:21:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:17.222 14:21:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:17.480 14:21:24 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:17.480 14:21:24 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:17.480 14:21:24 -- setup/devices.sh@63 -- # found=1 00:07:17.480 14:21:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.480 14:21:24 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:17.480 14:21:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.737 14:21:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:17.737 14:21:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.737 14:21:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:17.737 14:21:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:17.995 14:21:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:17.995 14:21:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:17.995 14:21:24 -- setup/devices.sh@68 -- # return 0 00:07:17.995 14:21:24 -- setup/devices.sh@128 -- # cleanup_nvme 00:07:17.995 14:21:24 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:17.995 14:21:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:17.995 14:21:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:17.995 14:21:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:17.995 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:07:17.995 00:07:17.995 real 0m4.638s 00:07:17.995 user 0m1.171s 00:07:17.995 sys 0m1.187s 00:07:17.995 14:21:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.995 ************************************ 00:07:17.995 END TEST nvme_mount 00:07:17.995 ************************************ 00:07:17.995 14:21:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.995 14:21:24 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:17.995 14:21:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.995 14:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.995 14:21:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.995 ************************************ 00:07:17.995 START TEST dm_mount 00:07:17.995 ************************************ 00:07:17.995 14:21:24 -- common/autotest_common.sh@1114 -- # dm_mount 00:07:17.995 14:21:24 -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:17.995 14:21:24 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:17.995 14:21:24 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:17.995 14:21:24 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:17.995 14:21:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:17.995 14:21:24 -- setup/common.sh@40 -- # local part_no=2 00:07:17.995 14:21:24 -- setup/common.sh@41 -- # local size=1073741824 00:07:17.995 14:21:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:17.995 14:21:24 -- setup/common.sh@44 -- # parts=() 00:07:17.995 14:21:24 -- setup/common.sh@44 -- # local parts 00:07:17.995 14:21:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:17.995 14:21:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:17.995 14:21:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:17.995 14:21:24 -- setup/common.sh@46 -- # (( part++ )) 00:07:17.995 14:21:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:17.995 14:21:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:17.995 14:21:24 -- setup/common.sh@46 -- # (( part++ )) 00:07:17.995 14:21:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:17.995 14:21:24 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:17.995 14:21:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:17.995 14:21:24 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:18.929 Creating new GPT entries in memory. 00:07:18.929 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:18.929 other utilities. 00:07:18.929 14:21:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:18.929 14:21:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:18.929 14:21:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:18.929 14:21:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:18.929 14:21:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:20.299 Creating new GPT entries in memory. 00:07:20.299 The operation has completed successfully. 00:07:20.299 14:21:26 -- setup/common.sh@57 -- # (( part++ )) 00:07:20.299 14:21:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:20.299 14:21:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:07:20.300 14:21:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:20.300 14:21:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:21.232 The operation has completed successfully. 00:07:21.232 14:21:27 -- setup/common.sh@57 -- # (( part++ )) 00:07:21.232 14:21:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:21.232 14:21:27 -- setup/common.sh@62 -- # wait 54375 00:07:21.232 14:21:27 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:21.232 14:21:27 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:21.232 14:21:27 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:21.232 14:21:27 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:21.232 14:21:27 -- setup/devices.sh@160 -- # for t in {1..5} 00:07:21.232 14:21:27 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:21.232 14:21:27 -- setup/devices.sh@161 -- # break 00:07:21.232 14:21:27 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:21.232 14:21:27 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:21.232 14:21:27 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:21.232 14:21:27 -- setup/devices.sh@166 -- # dm=dm-0 00:07:21.232 14:21:27 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:21.232 14:21:27 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:21.232 14:21:27 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:21.232 14:21:27 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:21.232 14:21:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:21.232 14:21:27 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:21.232 14:21:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:21.232 14:21:28 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:21.232 14:21:28 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:21.232 14:21:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:21.232 14:21:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:21.232 14:21:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:21.232 14:21:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:21.232 14:21:28 -- setup/devices.sh@53 -- # local found=0 00:07:21.232 14:21:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:21.232 14:21:28 -- setup/devices.sh@56 -- # : 00:07:21.232 14:21:28 -- setup/devices.sh@59 -- # local pci status 00:07:21.232 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.232 14:21:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:21.232 14:21:28 -- setup/devices.sh@47 -- # setup output config 00:07:21.232 14:21:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:21.232 14:21:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:21.232 14:21:28 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:21.232 14:21:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:21.232 14:21:28 -- setup/devices.sh@63 -- # found=1 00:07:21.232 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.232 14:21:28 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:21.232 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.798 14:21:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:21.798 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.798 14:21:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:21.798 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.798 14:21:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:21.798 14:21:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:21.798 14:21:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:21.798 14:21:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:21.798 14:21:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:21.798 14:21:28 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:21.799 14:21:28 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:21.799 14:21:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:21.799 14:21:28 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:21.799 14:21:28 -- setup/devices.sh@50 -- # local mount_point= 00:07:21.799 14:21:28 -- setup/devices.sh@51 -- # local test_file= 00:07:21.799 14:21:28 -- setup/devices.sh@53 -- # local found=0 00:07:21.799 14:21:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:21.799 14:21:28 -- setup/devices.sh@59 -- # local pci status 00:07:21.799 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.799 14:21:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:21.799 14:21:28 -- setup/devices.sh@47 -- # setup output config 00:07:21.799 14:21:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:21.799 14:21:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:22.055 14:21:28 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:22.055 14:21:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:22.055 14:21:28 -- setup/devices.sh@63 -- # found=1 00:07:22.055 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.056 14:21:28 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:22.056 14:21:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.313 14:21:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:22.313 14:21:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.313 14:21:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:22.313 14:21:29 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.569 14:21:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:22.569 14:21:29 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:22.569 14:21:29 -- setup/devices.sh@68 -- # return 0 00:07:22.569 14:21:29 -- setup/devices.sh@187 -- # cleanup_dm 00:07:22.569 14:21:29 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:22.569 14:21:29 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:22.569 14:21:29 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:22.569 14:21:29 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:22.569 14:21:29 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:22.569 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:22.569 14:21:29 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:22.569 14:21:29 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:22.569 00:07:22.569 real 0m4.550s 00:07:22.569 user 0m0.679s 00:07:22.569 sys 0m0.794s 00:07:22.569 14:21:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.569 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.569 ************************************ 00:07:22.569 END TEST dm_mount 00:07:22.569 ************************************ 00:07:22.569 14:21:29 -- setup/devices.sh@1 -- # cleanup 00:07:22.569 14:21:29 -- setup/devices.sh@11 -- # cleanup_nvme 00:07:22.569 14:21:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:22.569 14:21:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:22.569 14:21:29 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:22.569 14:21:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:22.569 14:21:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:22.827 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:22.827 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:22.827 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:22.827 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:22.827 14:21:29 -- setup/devices.sh@12 -- # cleanup_dm 00:07:22.827 14:21:29 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:22.827 14:21:29 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:22.827 14:21:29 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:22.827 14:21:29 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:22.827 14:21:29 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:22.827 14:21:29 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:22.827 00:07:22.827 real 0m11.119s 00:07:22.827 user 0m2.820s 00:07:22.827 sys 0m2.653s 00:07:22.827 14:21:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.827 ************************************ 00:07:22.827 END TEST devices 00:07:22.827 ************************************ 00:07:22.827 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.827 00:07:22.827 real 0m23.251s 00:07:22.827 user 0m8.416s 00:07:22.827 sys 0m9.277s 00:07:22.827 14:21:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.827 ************************************ 00:07:22.827 END TEST setup.sh 00:07:22.827 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.827 ************************************ 00:07:22.827 14:21:29 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:23.085 Hugepages 00:07:23.085 node hugesize free / total 00:07:23.085 node0 1048576kB 0 / 0 00:07:23.085 node0 2048kB 2048 / 2048 00:07:23.085 00:07:23.085 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:23.085 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:23.343 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:23.343 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:23.343 14:21:30 -- spdk/autotest.sh@128 -- # uname -s 00:07:23.343 14:21:30 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:07:23.343 14:21:30 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:07:23.343 14:21:30 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:23.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:24.187 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:24.187 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:07:24.187 14:21:31 -- common/autotest_common.sh@1527 -- # sleep 1 00:07:25.120 14:21:32 -- common/autotest_common.sh@1528 -- # bdfs=() 00:07:25.121 14:21:32 -- common/autotest_common.sh@1528 -- # local bdfs 00:07:25.121 14:21:32 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:07:25.121 14:21:32 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:07:25.121 14:21:32 -- common/autotest_common.sh@1508 -- # bdfs=() 00:07:25.121 14:21:32 -- common/autotest_common.sh@1508 -- # local bdfs 00:07:25.121 14:21:32 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:25.121 14:21:32 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:25.121 14:21:32 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:07:25.382 14:21:32 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:07:25.382 14:21:32 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:25.382 14:21:32 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:25.639 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.639 Waiting for block devices as requested 00:07:25.639 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.897 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.897 14:21:32 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:07:25.897 14:21:32 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:07:25.897 14:21:32 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:25.897 14:21:32 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:07:25.897 14:21:32 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:07:25.897 14:21:32 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1540 -- # grep oacs 00:07:25.897 14:21:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.897 14:21:32 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:07:25.897 14:21:32 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:07:25.897 14:21:32 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:07:25.897 14:21:32 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:07:25.897 14:21:32 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:07:25.897 14:21:32 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:07:25.897 14:21:32 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:07:25.897 14:21:32 -- common/autotest_common.sh@1552 -- # continue 00:07:25.897 14:21:32 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:07:25.897 14:21:32 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:07:25.897 14:21:32 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:07:25.898 14:21:32 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:25.898 14:21:32 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:07:25.898 14:21:32 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:07:25.898 14:21:32 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:07:25.898 14:21:32 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:07:25.898 14:21:32 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:07:25.898 14:21:32 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:07:25.898 14:21:32 -- common/autotest_common.sh@1540 -- # grep oacs 00:07:25.898 14:21:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:25.898 14:21:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.898 14:21:32 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:07:25.898 14:21:32 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:07:25.898 14:21:32 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:07:25.898 14:21:32 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:07:25.898 14:21:32 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:07:25.898 14:21:32 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:07:25.898 14:21:32 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:07:25.898 14:21:32 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:07:25.898 14:21:32 -- common/autotest_common.sh@1552 -- # continue 00:07:25.898 14:21:32 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:07:25.898 14:21:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.898 14:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:25.898 14:21:32 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:07:25.898 14:21:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.898 14:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:25.898 14:21:32 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:26.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:26.832 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.832 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:07:27.089 14:21:33 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:07:27.089 14:21:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.089 14:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:27.089 14:21:33 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:07:27.089 14:21:33 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:07:27.090 14:21:33 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:07:27.090 14:21:33 -- common/autotest_common.sh@1572 -- # bdfs=() 00:07:27.090 14:21:33 -- common/autotest_common.sh@1572 -- # local bdfs 00:07:27.090 14:21:33 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:07:27.090 14:21:33 -- common/autotest_common.sh@1508 -- # bdfs=() 00:07:27.090 14:21:33 -- common/autotest_common.sh@1508 -- # local bdfs 00:07:27.090 14:21:33 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:27.090 14:21:33 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:27.090 14:21:33 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:07:27.090 14:21:33 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:07:27.090 14:21:33 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:27.090 14:21:33 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:07:27.090 14:21:33 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:07:27.090 14:21:33 -- common/autotest_common.sh@1575 -- # device=0x0010 00:07:27.090 14:21:33 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:27.090 14:21:33 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:07:27.090 14:21:33 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:07:27.090 14:21:33 -- common/autotest_common.sh@1575 -- # device=0x0010 00:07:27.090 14:21:33 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:27.090 14:21:33 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:07:27.090 14:21:33 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:07:27.090 14:21:33 -- common/autotest_common.sh@1588 -- # return 0 00:07:27.090 14:21:33 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:07:27.090 14:21:33 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:07:27.090 14:21:33 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:07:27.090 14:21:33 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:07:27.090 14:21:33 -- spdk/autotest.sh@160 -- # timing_enter lib 00:07:27.090 14:21:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.090 14:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:27.090 14:21:33 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.090 14:21:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.090 14:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.090 14:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:27.090 ************************************ 00:07:27.090 START TEST env 00:07:27.090 ************************************ 00:07:27.090 14:21:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.090 * Looking for test storage... 
00:07:27.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:27.090 14:21:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:27.090 14:21:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:27.090 14:21:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:27.347 14:21:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:27.347 14:21:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:27.347 14:21:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:27.347 14:21:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:27.347 14:21:34 -- scripts/common.sh@335 -- # IFS=.-: 00:07:27.347 14:21:34 -- scripts/common.sh@335 -- # read -ra ver1 00:07:27.347 14:21:34 -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.347 14:21:34 -- scripts/common.sh@336 -- # read -ra ver2 00:07:27.347 14:21:34 -- scripts/common.sh@337 -- # local 'op=<' 00:07:27.347 14:21:34 -- scripts/common.sh@339 -- # ver1_l=2 00:07:27.347 14:21:34 -- scripts/common.sh@340 -- # ver2_l=1 00:07:27.347 14:21:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:27.347 14:21:34 -- scripts/common.sh@343 -- # case "$op" in 00:07:27.347 14:21:34 -- scripts/common.sh@344 -- # : 1 00:07:27.347 14:21:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:27.347 14:21:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.347 14:21:34 -- scripts/common.sh@364 -- # decimal 1 00:07:27.347 14:21:34 -- scripts/common.sh@352 -- # local d=1 00:07:27.347 14:21:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.347 14:21:34 -- scripts/common.sh@354 -- # echo 1 00:07:27.347 14:21:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:27.347 14:21:34 -- scripts/common.sh@365 -- # decimal 2 00:07:27.347 14:21:34 -- scripts/common.sh@352 -- # local d=2 00:07:27.347 14:21:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.347 14:21:34 -- scripts/common.sh@354 -- # echo 2 00:07:27.347 14:21:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:27.347 14:21:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:27.347 14:21:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:27.347 14:21:34 -- scripts/common.sh@367 -- # return 0 00:07:27.347 14:21:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.347 14:21:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:27.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.347 --rc genhtml_branch_coverage=1 00:07:27.347 --rc genhtml_function_coverage=1 00:07:27.347 --rc genhtml_legend=1 00:07:27.347 --rc geninfo_all_blocks=1 00:07:27.347 --rc geninfo_unexecuted_blocks=1 00:07:27.347 00:07:27.347 ' 00:07:27.347 14:21:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:27.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.347 --rc genhtml_branch_coverage=1 00:07:27.347 --rc genhtml_function_coverage=1 00:07:27.347 --rc genhtml_legend=1 00:07:27.347 --rc geninfo_all_blocks=1 00:07:27.347 --rc geninfo_unexecuted_blocks=1 00:07:27.347 00:07:27.347 ' 00:07:27.347 14:21:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:27.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.347 --rc genhtml_branch_coverage=1 00:07:27.347 --rc genhtml_function_coverage=1 00:07:27.347 --rc genhtml_legend=1 00:07:27.347 --rc geninfo_all_blocks=1 00:07:27.347 --rc geninfo_unexecuted_blocks=1 00:07:27.347 00:07:27.347 ' 00:07:27.347 14:21:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:27.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.347 --rc genhtml_branch_coverage=1 00:07:27.347 --rc genhtml_function_coverage=1 00:07:27.347 --rc genhtml_legend=1 00:07:27.347 --rc geninfo_all_blocks=1 00:07:27.347 --rc geninfo_unexecuted_blocks=1 00:07:27.347 00:07:27.347 ' 00:07:27.347 14:21:34 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.347 14:21:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.347 14:21:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.347 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:27.347 ************************************ 00:07:27.347 START TEST env_memory 00:07:27.347 ************************************ 00:07:27.347 14:21:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.347 00:07:27.347 00:07:27.347 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.347 http://cunit.sourceforge.net/ 00:07:27.347 00:07:27.347 00:07:27.347 Suite: memory 00:07:27.347 Test: alloc and free memory map ...[2024-12-06 14:21:34.147861] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:27.347 passed 00:07:27.347 Test: mem map translation ...[2024-12-06 14:21:34.173041] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:27.347 [2024-12-06 14:21:34.173138] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:27.347 [2024-12-06 14:21:34.173194] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:27.347 [2024-12-06 14:21:34.173213] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:27.347 passed 00:07:27.347 Test: mem map registration ...[2024-12-06 14:21:34.225081] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:27.347 [2024-12-06 14:21:34.225198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:27.347 passed 00:07:27.347 Test: mem map adjacent registrations ...passed 00:07:27.347 00:07:27.347 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.347 suites 1 1 n/a 0 0 00:07:27.347 tests 4 4 4 0 0 00:07:27.347 asserts 152 152 152 0 n/a 00:07:27.347 00:07:27.347 Elapsed time = 0.172 seconds 00:07:27.347 00:07:27.347 real 0m0.193s 00:07:27.347 user 0m0.175s 00:07:27.348 sys 0m0.013s 00:07:27.348 14:21:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.348 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:27.348 ************************************ 00:07:27.348 END TEST env_memory 00:07:27.348 ************************************ 00:07:27.604 14:21:34 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:27.604 14:21:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.604 14:21:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.604 14:21:34 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.604 ************************************ 00:07:27.604 START TEST env_vtophys 00:07:27.604 ************************************ 00:07:27.604 14:21:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:27.604 EAL: lib.eal log level changed from notice to debug 00:07:27.604 EAL: Detected lcore 0 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 1 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 2 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 3 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 4 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 5 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 6 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 7 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 8 as core 0 on socket 0 00:07:27.604 EAL: Detected lcore 9 as core 0 on socket 0 00:07:27.604 EAL: Maximum logical cores by configuration: 128 00:07:27.604 EAL: Detected CPU lcores: 10 00:07:27.604 EAL: Detected NUMA nodes: 1 00:07:27.604 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:27.604 EAL: Detected shared linkage of DPDK 00:07:27.604 EAL: No shared files mode enabled, IPC will be disabled 00:07:27.604 EAL: Selected IOVA mode 'PA' 00:07:27.605 EAL: Probing VFIO support... 00:07:27.605 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:27.605 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:27.605 EAL: Ask a virtual area of 0x2e000 bytes 00:07:27.605 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:27.605 EAL: Setting up physically contiguous memory... 00:07:27.605 EAL: Setting maximum number of open files to 524288 00:07:27.605 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:27.605 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:27.605 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.605 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:27.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.605 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.605 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:27.605 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:27.605 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.605 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:27.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.605 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.605 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:27.605 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:27.605 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.605 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:27.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.605 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.605 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:27.605 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:27.605 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.605 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:27.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.605 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.605 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:27.605 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
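The EAL lines above show the vtophys test reserving virtual address space for its memseg lists; the physical pages behind them come from the 2 MiB hugepage pool reported earlier by setup.sh status (node0 2048kB 2048 / 2048). A quick way to look at that same pool outside the test suite is the standard kernel interface below; this is only an illustrative check and is not part of the autotest scripts.

    # Illustrative inspection of the 2 MiB hugepage pool the EAL heap draws from
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # pages reserved system-wide
    grep -i ^huge /proc/meminfo                                  # HugePages_Total / HugePages_Free summary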
00:07:27.605 EAL: Hugepages will be freed exactly as allocated. 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: TSC frequency is ~2200000 KHz 00:07:27.605 EAL: Main lcore 0 is ready (tid=7fbca067aa00;cpuset=[0]) 00:07:27.605 EAL: Trying to obtain current memory policy. 00:07:27.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.605 EAL: Restoring previous memory policy: 0 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was expanded by 2MB 00:07:27.605 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:27.605 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:27.605 EAL: Mem event callback 'spdk:(nil)' registered 00:07:27.605 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:27.605 00:07:27.605 00:07:27.605 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.605 http://cunit.sourceforge.net/ 00:07:27.605 00:07:27.605 00:07:27.605 Suite: components_suite 00:07:27.605 Test: vtophys_malloc_test ...passed 00:07:27.605 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:27.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.605 EAL: Restoring previous memory policy: 4 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was expanded by 4MB 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was shrunk by 4MB 00:07:27.605 EAL: Trying to obtain current memory policy. 00:07:27.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.605 EAL: Restoring previous memory policy: 4 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was expanded by 6MB 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was shrunk by 6MB 00:07:27.605 EAL: Trying to obtain current memory policy. 00:07:27.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.605 EAL: Restoring previous memory policy: 4 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was expanded by 10MB 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was shrunk by 10MB 00:07:27.605 EAL: Trying to obtain current memory policy. 
00:07:27.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.605 EAL: Restoring previous memory policy: 4 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was expanded by 18MB 00:07:27.605 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.605 EAL: request: mp_malloc_sync 00:07:27.605 EAL: No shared files mode enabled, IPC is disabled 00:07:27.605 EAL: Heap on socket 0 was shrunk by 18MB 00:07:27.605 EAL: Trying to obtain current memory policy. 00:07:27.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.876 EAL: Restoring previous memory policy: 4 00:07:27.876 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.876 EAL: request: mp_malloc_sync 00:07:27.876 EAL: No shared files mode enabled, IPC is disabled 00:07:27.876 EAL: Heap on socket 0 was expanded by 34MB 00:07:27.876 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.876 EAL: request: mp_malloc_sync 00:07:27.876 EAL: No shared files mode enabled, IPC is disabled 00:07:27.876 EAL: Heap on socket 0 was shrunk by 34MB 00:07:27.876 EAL: Trying to obtain current memory policy. 00:07:27.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.876 EAL: Restoring previous memory policy: 4 00:07:27.876 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.876 EAL: request: mp_malloc_sync 00:07:27.876 EAL: No shared files mode enabled, IPC is disabled 00:07:27.876 EAL: Heap on socket 0 was expanded by 66MB 00:07:27.876 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.876 EAL: request: mp_malloc_sync 00:07:27.876 EAL: No shared files mode enabled, IPC is disabled 00:07:27.876 EAL: Heap on socket 0 was shrunk by 66MB 00:07:27.876 EAL: Trying to obtain current memory policy. 00:07:27.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.135 EAL: Restoring previous memory policy: 4 00:07:28.135 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.135 EAL: request: mp_malloc_sync 00:07:28.135 EAL: No shared files mode enabled, IPC is disabled 00:07:28.135 EAL: Heap on socket 0 was expanded by 130MB 00:07:28.135 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.392 EAL: request: mp_malloc_sync 00:07:28.392 EAL: No shared files mode enabled, IPC is disabled 00:07:28.392 EAL: Heap on socket 0 was shrunk by 130MB 00:07:28.392 EAL: Trying to obtain current memory policy. 00:07:28.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.650 EAL: Restoring previous memory policy: 4 00:07:28.650 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.650 EAL: request: mp_malloc_sync 00:07:28.650 EAL: No shared files mode enabled, IPC is disabled 00:07:28.650 EAL: Heap on socket 0 was expanded by 258MB 00:07:28.650 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.908 EAL: request: mp_malloc_sync 00:07:28.908 EAL: No shared files mode enabled, IPC is disabled 00:07:28.908 EAL: Heap on socket 0 was shrunk by 258MB 00:07:28.908 EAL: Trying to obtain current memory policy. 
00:07:28.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.474 EAL: Restoring previous memory policy: 4 00:07:29.474 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.474 EAL: request: mp_malloc_sync 00:07:29.474 EAL: No shared files mode enabled, IPC is disabled 00:07:29.474 EAL: Heap on socket 0 was expanded by 514MB 00:07:29.732 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.989 EAL: request: mp_malloc_sync 00:07:29.989 EAL: No shared files mode enabled, IPC is disabled 00:07:29.989 EAL: Heap on socket 0 was shrunk by 514MB 00:07:29.989 EAL: Trying to obtain current memory policy. 00:07:29.989 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.925 EAL: Restoring previous memory policy: 4 00:07:30.925 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.925 EAL: request: mp_malloc_sync 00:07:30.925 EAL: No shared files mode enabled, IPC is disabled 00:07:30.925 EAL: Heap on socket 0 was expanded by 1026MB 00:07:31.860 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.426 EAL: request: mp_malloc_sync 00:07:32.426 EAL: No shared files mode enabled, IPC is disabled 00:07:32.426 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:32.426 passed 00:07:32.426 00:07:32.426 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.426 suites 1 1 n/a 0 0 00:07:32.426 tests 2 2 2 0 0 00:07:32.426 asserts 5204 5204 5204 0 n/a 00:07:32.426 00:07:32.426 Elapsed time = 4.736 seconds 00:07:32.426 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.426 EAL: request: mp_malloc_sync 00:07:32.426 EAL: No shared files mode enabled, IPC is disabled 00:07:32.426 EAL: Heap on socket 0 was shrunk by 2MB 00:07:32.426 EAL: No shared files mode enabled, IPC is disabled 00:07:32.426 EAL: No shared files mode enabled, IPC is disabled 00:07:32.426 EAL: No shared files mode enabled, IPC is disabled 00:07:32.426 00:07:32.426 real 0m4.995s 00:07:32.426 user 0m3.167s 00:07:32.426 sys 0m1.639s 00:07:32.426 14:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.426 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 ************************************ 00:07:32.426 END TEST env_vtophys 00:07:32.426 ************************************ 00:07:32.426 14:21:39 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:32.426 14:21:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.426 14:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.426 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 ************************************ 00:07:32.426 START TEST env_pci 00:07:32.426 ************************************ 00:07:32.426 14:21:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:32.684 00:07:32.684 00:07:32.684 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.684 http://cunit.sourceforge.net/ 00:07:32.684 00:07:32.684 00:07:32.684 Suite: pci 00:07:32.684 Test: pci_hook ...[2024-12-06 14:21:39.400786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55552 has claimed it 00:07:32.684 passed 00:07:32.684 00:07:32.684 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.684 suites 1 1 n/a 0 0 00:07:32.684 EAL: Cannot find device (10000:00:01.0) 00:07:32.684 EAL: Failed to attach device on primary process 00:07:32.684 tests 1 1 1 0 0 00:07:32.684 asserts 25 25 25 0 n/a 00:07:32.684 00:07:32.684 Elapsed 
time = 0.004 seconds 00:07:32.684 00:07:32.684 real 0m0.023s 00:07:32.684 user 0m0.011s 00:07:32.684 sys 0m0.012s 00:07:32.684 14:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.684 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.684 ************************************ 00:07:32.684 END TEST env_pci 00:07:32.684 ************************************ 00:07:32.684 14:21:39 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:32.684 14:21:39 -- env/env.sh@15 -- # uname 00:07:32.684 14:21:39 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:32.684 14:21:39 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:32.684 14:21:39 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:32.685 14:21:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:32.685 14:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.685 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.685 ************************************ 00:07:32.685 START TEST env_dpdk_post_init 00:07:32.685 ************************************ 00:07:32.685 14:21:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:32.685 EAL: Detected CPU lcores: 10 00:07:32.685 EAL: Detected NUMA nodes: 1 00:07:32.685 EAL: Detected shared linkage of DPDK 00:07:32.685 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:32.685 EAL: Selected IOVA mode 'PA' 00:07:32.685 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:32.685 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:07:32.685 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:07:32.942 Starting DPDK initialization... 00:07:32.942 Starting SPDK post initialization... 00:07:32.942 SPDK NVMe probe 00:07:32.942 Attaching to 0000:00:06.0 00:07:32.942 Attaching to 0000:00:07.0 00:07:32.942 Attached to 0000:00:06.0 00:07:32.942 Attached to 0000:00:07.0 00:07:32.942 Cleaning up... 
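The env_dpdk_post_init run above is launched by env.sh with the argument string assembled in the trace: -c 0x1 for a single core plus --base-virtaddr=0x200000000000 on Linux. It initializes DPDK, performs SPDK's post-initialization, and probes the two emulated NVMe controllers (0000:00:06.0 and 0000:00:07.0). A standalone invocation equivalent to what the harness does would look roughly like the sketch below; the explicit binary path is an assumption based on the repo layout used throughout this run.

    # Rough standalone equivalent of the traced env_dpdk_post_init invocation (sketch)
    argv='-c 0x1 --base-virtaddr=0x200000000000'
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv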
00:07:32.942 00:07:32.942 real 0m0.195s 00:07:32.942 user 0m0.045s 00:07:32.942 sys 0m0.050s 00:07:32.942 14:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.942 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.942 ************************************ 00:07:32.942 END TEST env_dpdk_post_init 00:07:32.942 ************************************ 00:07:32.942 14:21:39 -- env/env.sh@26 -- # uname 00:07:32.942 14:21:39 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:32.943 14:21:39 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:32.943 14:21:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.943 14:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.943 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.943 ************************************ 00:07:32.943 START TEST env_mem_callbacks 00:07:32.943 ************************************ 00:07:32.943 14:21:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:32.943 EAL: Detected CPU lcores: 10 00:07:32.943 EAL: Detected NUMA nodes: 1 00:07:32.943 EAL: Detected shared linkage of DPDK 00:07:32.943 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:32.943 EAL: Selected IOVA mode 'PA' 00:07:32.943 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:32.943 00:07:32.943 00:07:32.943 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.943 http://cunit.sourceforge.net/ 00:07:32.943 00:07:32.943 00:07:32.943 Suite: memory 00:07:32.943 Test: test ... 00:07:32.943 register 0x200000200000 2097152 00:07:32.943 malloc 3145728 00:07:32.943 register 0x200000400000 4194304 00:07:32.943 buf 0x200000500000 len 3145728 PASSED 00:07:32.943 malloc 64 00:07:32.943 buf 0x2000004fff40 len 64 PASSED 00:07:32.943 malloc 4194304 00:07:32.943 register 0x200000800000 6291456 00:07:32.943 buf 0x200000a00000 len 4194304 PASSED 00:07:32.943 free 0x200000500000 3145728 00:07:32.943 free 0x2000004fff40 64 00:07:32.943 unregister 0x200000400000 4194304 PASSED 00:07:32.943 free 0x200000a00000 4194304 00:07:32.943 unregister 0x200000800000 6291456 PASSED 00:07:32.943 malloc 8388608 00:07:32.943 register 0x200000400000 10485760 00:07:32.943 buf 0x200000600000 len 8388608 PASSED 00:07:32.943 free 0x200000600000 8388608 00:07:32.943 unregister 0x200000400000 10485760 PASSED 00:07:32.943 passed 00:07:32.943 00:07:32.943 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.943 suites 1 1 n/a 0 0 00:07:32.943 tests 1 1 1 0 0 00:07:32.943 asserts 15 15 15 0 n/a 00:07:32.943 00:07:32.943 Elapsed time = 0.014 seconds 00:07:32.943 00:07:32.943 real 0m0.151s 00:07:32.943 user 0m0.015s 00:07:32.943 sys 0m0.034s 00:07:32.943 14:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.943 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.943 ************************************ 00:07:32.943 END TEST env_mem_callbacks 00:07:32.943 ************************************ 00:07:32.943 00:07:32.943 real 0m5.964s 00:07:32.943 user 0m3.576s 00:07:32.943 sys 0m1.989s 00:07:32.943 14:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.943 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.943 ************************************ 00:07:32.943 END TEST env 00:07:32.943 ************************************ 00:07:33.200 14:21:39 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:07:33.200 14:21:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.200 14:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.200 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:07:33.200 ************************************ 00:07:33.200 START TEST rpc 00:07:33.200 ************************************ 00:07:33.200 14:21:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:33.200 * Looking for test storage... 00:07:33.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:33.200 14:21:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:33.200 14:21:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:33.200 14:21:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:33.200 14:21:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:33.200 14:21:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:33.200 14:21:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:33.200 14:21:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:33.200 14:21:40 -- scripts/common.sh@335 -- # IFS=.-: 00:07:33.200 14:21:40 -- scripts/common.sh@335 -- # read -ra ver1 00:07:33.200 14:21:40 -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.200 14:21:40 -- scripts/common.sh@336 -- # read -ra ver2 00:07:33.200 14:21:40 -- scripts/common.sh@337 -- # local 'op=<' 00:07:33.200 14:21:40 -- scripts/common.sh@339 -- # ver1_l=2 00:07:33.200 14:21:40 -- scripts/common.sh@340 -- # ver2_l=1 00:07:33.200 14:21:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:33.200 14:21:40 -- scripts/common.sh@343 -- # case "$op" in 00:07:33.200 14:21:40 -- scripts/common.sh@344 -- # : 1 00:07:33.200 14:21:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:33.200 14:21:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.200 14:21:40 -- scripts/common.sh@364 -- # decimal 1 00:07:33.200 14:21:40 -- scripts/common.sh@352 -- # local d=1 00:07:33.200 14:21:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.200 14:21:40 -- scripts/common.sh@354 -- # echo 1 00:07:33.200 14:21:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:33.200 14:21:40 -- scripts/common.sh@365 -- # decimal 2 00:07:33.200 14:21:40 -- scripts/common.sh@352 -- # local d=2 00:07:33.200 14:21:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.200 14:21:40 -- scripts/common.sh@354 -- # echo 2 00:07:33.200 14:21:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:33.200 14:21:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:33.200 14:21:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:33.200 14:21:40 -- scripts/common.sh@367 -- # return 0 00:07:33.200 14:21:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.200 14:21:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.200 --rc genhtml_branch_coverage=1 00:07:33.200 --rc genhtml_function_coverage=1 00:07:33.200 --rc genhtml_legend=1 00:07:33.200 --rc geninfo_all_blocks=1 00:07:33.200 --rc geninfo_unexecuted_blocks=1 00:07:33.200 00:07:33.200 ' 00:07:33.200 14:21:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.200 --rc genhtml_branch_coverage=1 00:07:33.200 --rc genhtml_function_coverage=1 00:07:33.200 --rc genhtml_legend=1 00:07:33.200 --rc geninfo_all_blocks=1 00:07:33.200 --rc geninfo_unexecuted_blocks=1 00:07:33.200 00:07:33.200 ' 00:07:33.200 14:21:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.200 --rc genhtml_branch_coverage=1 00:07:33.200 --rc genhtml_function_coverage=1 00:07:33.200 --rc genhtml_legend=1 00:07:33.200 --rc geninfo_all_blocks=1 00:07:33.200 --rc geninfo_unexecuted_blocks=1 00:07:33.200 00:07:33.200 ' 00:07:33.200 14:21:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.200 --rc genhtml_branch_coverage=1 00:07:33.200 --rc genhtml_function_coverage=1 00:07:33.200 --rc genhtml_legend=1 00:07:33.200 --rc geninfo_all_blocks=1 00:07:33.200 --rc geninfo_unexecuted_blocks=1 00:07:33.200 00:07:33.200 ' 00:07:33.200 14:21:40 -- rpc/rpc.sh@65 -- # spdk_pid=55674 00:07:33.200 14:21:40 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.200 14:21:40 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:33.200 14:21:40 -- rpc/rpc.sh@67 -- # waitforlisten 55674 00:07:33.200 14:21:40 -- common/autotest_common.sh@829 -- # '[' -z 55674 ']' 00:07:33.200 14:21:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.200 14:21:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.200 14:21:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
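rpc.sh above starts the target with spdk_tgt -e bdev (pid 55674 in this run) and then blocks in waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock is ready. A minimal hand-rolled version of that launch-and-wait pattern is sketched below; the polling loop via scripts/rpc.py rpc_get_methods is an approximation of what the real waitforlisten helper achieves, not its actual implementation.

    # Sketch: start spdk_tgt and poll the RPC socket until it answers (approximation of waitforlisten)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done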
00:07:33.200 14:21:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.200 14:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 [2024-12-06 14:21:40.230498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.457 [2024-12-06 14:21:40.230675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55674 ] 00:07:33.457 [2024-12-06 14:21:40.367211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.714 [2024-12-06 14:21:40.619986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:33.714 [2024-12-06 14:21:40.620187] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:33.714 [2024-12-06 14:21:40.620203] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55674' to capture a snapshot of events at runtime. 00:07:33.714 [2024-12-06 14:21:40.620213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55674 for offline analysis/debug. 00:07:33.714 [2024-12-06 14:21:40.620266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.610 14:21:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.610 14:21:42 -- common/autotest_common.sh@862 -- # return 0 00:07:35.610 14:21:42 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:35.610 14:21:42 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:35.610 14:21:42 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:35.610 14:21:42 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:35.610 14:21:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.610 14:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.610 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.610 ************************************ 00:07:35.610 START TEST rpc_integrity 00:07:35.610 ************************************ 00:07:35.610 14:21:42 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:07:35.610 14:21:42 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:35.610 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.610 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.610 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.610 14:21:42 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:35.610 14:21:42 -- rpc/rpc.sh@13 -- # jq length 00:07:35.610 14:21:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:35.610 14:21:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:35.610 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.610 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.610 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.610 14:21:42 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:35.610 14:21:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:35.610 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.610 14:21:42 -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.610 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.610 14:21:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:35.610 { 00:07:35.610 "aliases": [ 00:07:35.610 "6b31db1a-6efd-4a51-9657-f2c36a853dbc" 00:07:35.610 ], 00:07:35.610 "assigned_rate_limits": { 00:07:35.610 "r_mbytes_per_sec": 0, 00:07:35.610 "rw_ios_per_sec": 0, 00:07:35.610 "rw_mbytes_per_sec": 0, 00:07:35.610 "w_mbytes_per_sec": 0 00:07:35.610 }, 00:07:35.610 "block_size": 512, 00:07:35.610 "claimed": false, 00:07:35.610 "driver_specific": {}, 00:07:35.610 "memory_domains": [ 00:07:35.610 { 00:07:35.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.610 "dma_device_type": 2 00:07:35.610 } 00:07:35.610 ], 00:07:35.610 "name": "Malloc0", 00:07:35.610 "num_blocks": 16384, 00:07:35.610 "product_name": "Malloc disk", 00:07:35.610 "supported_io_types": { 00:07:35.610 "abort": true, 00:07:35.610 "compare": false, 00:07:35.610 "compare_and_write": false, 00:07:35.610 "flush": true, 00:07:35.610 "nvme_admin": false, 00:07:35.610 "nvme_io": false, 00:07:35.610 "read": true, 00:07:35.610 "reset": true, 00:07:35.610 "unmap": true, 00:07:35.610 "write": true, 00:07:35.610 "write_zeroes": true 00:07:35.610 }, 00:07:35.610 "uuid": "6b31db1a-6efd-4a51-9657-f2c36a853dbc", 00:07:35.610 "zoned": false 00:07:35.610 } 00:07:35.610 ]' 00:07:35.610 14:21:42 -- rpc/rpc.sh@17 -- # jq length 00:07:35.610 14:21:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:35.610 14:21:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:35.610 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.610 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.610 [2024-12-06 14:21:42.313359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:35.610 [2024-12-06 14:21:42.313478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.610 [2024-12-06 14:21:42.313506] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cae880 00:07:35.610 [2024-12-06 14:21:42.313519] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.610 [2024-12-06 14:21:42.315973] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.610 [2024-12-06 14:21:42.316233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:35.610 Passthru0 00:07:35.610 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.610 14:21:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:35.610 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.610 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.610 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.610 14:21:42 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:35.610 { 00:07:35.610 "aliases": [ 00:07:35.610 "6b31db1a-6efd-4a51-9657-f2c36a853dbc" 00:07:35.610 ], 00:07:35.610 "assigned_rate_limits": { 00:07:35.610 "r_mbytes_per_sec": 0, 00:07:35.610 "rw_ios_per_sec": 0, 00:07:35.610 "rw_mbytes_per_sec": 0, 00:07:35.610 "w_mbytes_per_sec": 0 00:07:35.610 }, 00:07:35.610 "block_size": 512, 00:07:35.610 "claim_type": "exclusive_write", 00:07:35.610 "claimed": true, 00:07:35.610 "driver_specific": {}, 00:07:35.611 "memory_domains": [ 00:07:35.611 { 00:07:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.611 "dma_device_type": 2 00:07:35.611 } 00:07:35.611 ], 00:07:35.611 "name": "Malloc0", 00:07:35.611 "num_blocks": 16384, 
00:07:35.611 "product_name": "Malloc disk", 00:07:35.611 "supported_io_types": { 00:07:35.611 "abort": true, 00:07:35.611 "compare": false, 00:07:35.611 "compare_and_write": false, 00:07:35.611 "flush": true, 00:07:35.611 "nvme_admin": false, 00:07:35.611 "nvme_io": false, 00:07:35.611 "read": true, 00:07:35.611 "reset": true, 00:07:35.611 "unmap": true, 00:07:35.611 "write": true, 00:07:35.611 "write_zeroes": true 00:07:35.611 }, 00:07:35.611 "uuid": "6b31db1a-6efd-4a51-9657-f2c36a853dbc", 00:07:35.611 "zoned": false 00:07:35.611 }, 00:07:35.611 { 00:07:35.611 "aliases": [ 00:07:35.611 "65514b2a-b57b-55dc-ba31-3521357e7b86" 00:07:35.611 ], 00:07:35.611 "assigned_rate_limits": { 00:07:35.611 "r_mbytes_per_sec": 0, 00:07:35.611 "rw_ios_per_sec": 0, 00:07:35.611 "rw_mbytes_per_sec": 0, 00:07:35.611 "w_mbytes_per_sec": 0 00:07:35.611 }, 00:07:35.611 "block_size": 512, 00:07:35.611 "claimed": false, 00:07:35.611 "driver_specific": { 00:07:35.611 "passthru": { 00:07:35.611 "base_bdev_name": "Malloc0", 00:07:35.611 "name": "Passthru0" 00:07:35.611 } 00:07:35.611 }, 00:07:35.611 "memory_domains": [ 00:07:35.611 { 00:07:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.611 "dma_device_type": 2 00:07:35.611 } 00:07:35.611 ], 00:07:35.611 "name": "Passthru0", 00:07:35.611 "num_blocks": 16384, 00:07:35.611 "product_name": "passthru", 00:07:35.611 "supported_io_types": { 00:07:35.611 "abort": true, 00:07:35.611 "compare": false, 00:07:35.611 "compare_and_write": false, 00:07:35.611 "flush": true, 00:07:35.611 "nvme_admin": false, 00:07:35.611 "nvme_io": false, 00:07:35.611 "read": true, 00:07:35.611 "reset": true, 00:07:35.611 "unmap": true, 00:07:35.611 "write": true, 00:07:35.611 "write_zeroes": true 00:07:35.611 }, 00:07:35.611 "uuid": "65514b2a-b57b-55dc-ba31-3521357e7b86", 00:07:35.611 "zoned": false 00:07:35.611 } 00:07:35.611 ]' 00:07:35.611 14:21:42 -- rpc/rpc.sh@21 -- # jq length 00:07:35.611 14:21:42 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:35.611 14:21:42 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:35.611 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.611 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.611 14:21:42 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:35.611 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.611 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.611 14:21:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:35.611 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.611 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.611 14:21:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:35.611 14:21:42 -- rpc/rpc.sh@26 -- # jq length 00:07:35.611 ************************************ 00:07:35.611 END TEST rpc_integrity 00:07:35.611 ************************************ 00:07:35.611 14:21:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:35.611 00:07:35.611 real 0m0.356s 00:07:35.611 user 0m0.225s 00:07:35.611 sys 0m0.042s 00:07:35.611 14:21:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.611 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 14:21:42 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:35.611 14:21:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.611 
14:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.611 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 ************************************ 00:07:35.611 START TEST rpc_plugins 00:07:35.611 ************************************ 00:07:35.611 14:21:42 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:07:35.611 14:21:42 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:35.611 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.611 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.611 14:21:42 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:35.611 14:21:42 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:35.611 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.611 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.867 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.867 14:21:42 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:35.867 { 00:07:35.867 "aliases": [ 00:07:35.867 "3ec1999a-ab30-471a-8f2f-3234486d0cd9" 00:07:35.867 ], 00:07:35.868 "assigned_rate_limits": { 00:07:35.868 "r_mbytes_per_sec": 0, 00:07:35.868 "rw_ios_per_sec": 0, 00:07:35.868 "rw_mbytes_per_sec": 0, 00:07:35.868 "w_mbytes_per_sec": 0 00:07:35.868 }, 00:07:35.868 "block_size": 4096, 00:07:35.868 "claimed": false, 00:07:35.868 "driver_specific": {}, 00:07:35.868 "memory_domains": [ 00:07:35.868 { 00:07:35.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.868 "dma_device_type": 2 00:07:35.868 } 00:07:35.868 ], 00:07:35.868 "name": "Malloc1", 00:07:35.868 "num_blocks": 256, 00:07:35.868 "product_name": "Malloc disk", 00:07:35.868 "supported_io_types": { 00:07:35.868 "abort": true, 00:07:35.868 "compare": false, 00:07:35.868 "compare_and_write": false, 00:07:35.868 "flush": true, 00:07:35.868 "nvme_admin": false, 00:07:35.868 "nvme_io": false, 00:07:35.868 "read": true, 00:07:35.868 "reset": true, 00:07:35.868 "unmap": true, 00:07:35.868 "write": true, 00:07:35.868 "write_zeroes": true 00:07:35.868 }, 00:07:35.868 "uuid": "3ec1999a-ab30-471a-8f2f-3234486d0cd9", 00:07:35.868 "zoned": false 00:07:35.868 } 00:07:35.868 ]' 00:07:35.868 14:21:42 -- rpc/rpc.sh@32 -- # jq length 00:07:35.868 14:21:42 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:35.868 14:21:42 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:35.868 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.868 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.868 14:21:42 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:35.868 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.868 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.868 14:21:42 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:35.868 14:21:42 -- rpc/rpc.sh@36 -- # jq length 00:07:35.868 ************************************ 00:07:35.868 END TEST rpc_plugins 00:07:35.868 ************************************ 00:07:35.868 14:21:42 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:35.868 00:07:35.868 real 0m0.170s 00:07:35.868 user 0m0.107s 00:07:35.868 sys 0m0.019s 00:07:35.868 14:21:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.868 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 14:21:42 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
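For reference, the rpc_integrity and rpc_plugins checks above reduce to a handful of rpc.py calls against the target's UNIX socket. A minimal standalone sketch of the same round-trip, assuming an spdk_tgt is already listening on /var/tmp/spdk_tgt.sock and using the same sizes the test uses:

#!/usr/bin/env bash
# Sketch only: re-creates the malloc + passthru pair the test builds, then checks
# the bdev count with jq, the same assertion rpc.sh makes via 'jq length'.
set -euo pipefail

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

malloc=$($rpc bdev_malloc_create 8 512)          # 8 MB bdev, 512-byte blocks; prints its name
$rpc bdev_passthru_create -b "$malloc" -p Passthru0

count=$($rpc bdev_get_bdevs | jq length)
echo "bdevs after create: $count"                # expected: 2

$rpc bdev_passthru_delete Passthru0              # tear down in reverse order
$rpc bdev_malloc_delete "$malloc"
$rpc bdev_get_bdevs | jq length                  # expected: 0
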
00:07:35.868 14:21:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.868 14:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.868 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 ************************************ 00:07:35.868 START TEST rpc_trace_cmd_test 00:07:35.868 ************************************ 00:07:35.868 14:21:42 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:07:35.868 14:21:42 -- rpc/rpc.sh@40 -- # local info 00:07:35.868 14:21:42 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:35.868 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.868 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.868 14:21:42 -- rpc/rpc.sh@42 -- # info='{ 00:07:35.868 "bdev": { 00:07:35.868 "mask": "0x8", 00:07:35.868 "tpoint_mask": "0xffffffffffffffff" 00:07:35.868 }, 00:07:35.868 "bdev_nvme": { 00:07:35.868 "mask": "0x4000", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "blobfs": { 00:07:35.868 "mask": "0x80", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "dsa": { 00:07:35.868 "mask": "0x200", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "ftl": { 00:07:35.868 "mask": "0x40", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "iaa": { 00:07:35.868 "mask": "0x1000", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "iscsi_conn": { 00:07:35.868 "mask": "0x2", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "nvme_pcie": { 00:07:35.868 "mask": "0x800", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "nvme_tcp": { 00:07:35.868 "mask": "0x2000", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "nvmf_rdma": { 00:07:35.868 "mask": "0x10", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "nvmf_tcp": { 00:07:35.868 "mask": "0x20", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "scsi": { 00:07:35.868 "mask": "0x4", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "thread": { 00:07:35.868 "mask": "0x400", 00:07:35.868 "tpoint_mask": "0x0" 00:07:35.868 }, 00:07:35.868 "tpoint_group_mask": "0x8", 00:07:35.868 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55674" 00:07:35.868 }' 00:07:35.868 14:21:42 -- rpc/rpc.sh@43 -- # jq length 00:07:35.868 14:21:42 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:35.868 14:21:42 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:36.126 14:21:42 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:36.126 14:21:42 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:36.126 14:21:42 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:36.126 14:21:42 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:36.126 14:21:42 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:36.126 14:21:42 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:36.126 ************************************ 00:07:36.126 END TEST rpc_trace_cmd_test 00:07:36.126 ************************************ 00:07:36.126 14:21:43 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:36.126 00:07:36.126 real 0m0.276s 00:07:36.126 user 0m0.227s 00:07:36.126 sys 0m0.036s 00:07:36.126 14:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.126 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.126 14:21:43 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:07:36.126 14:21:43 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:07:36.126 14:21:43 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:07:36.126 14:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.126 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.126 ************************************ 00:07:36.126 START TEST go_rpc 00:07:36.126 ************************************ 00:07:36.126 14:21:43 -- common/autotest_common.sh@1114 -- # go_rpc 00:07:36.126 14:21:43 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:36.383 14:21:43 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:07:36.383 14:21:43 -- rpc/rpc.sh@52 -- # jq length 00:07:36.383 14:21:43 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:07:36.383 14:21:43 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:07:36.383 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.383 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.383 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.383 14:21:43 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:07:36.383 14:21:43 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:36.384 14:21:43 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["aed82671-188e-4b26-9bad-96aed376d0ca"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"aed82671-188e-4b26-9bad-96aed376d0ca","zoned":false}]' 00:07:36.384 14:21:43 -- rpc/rpc.sh@57 -- # jq length 00:07:36.384 14:21:43 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:07:36.384 14:21:43 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:36.384 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.384 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.384 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.384 14:21:43 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:36.384 14:21:43 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:07:36.384 14:21:43 -- rpc/rpc.sh@61 -- # jq length 00:07:36.384 14:21:43 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:07:36.384 00:07:36.384 real 0m0.215s 00:07:36.384 user 0m0.138s 00:07:36.384 sys 0m0.038s 00:07:36.384 14:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.384 ************************************ 00:07:36.384 END TEST go_rpc 00:07:36.384 ************************************ 00:07:36.384 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.384 14:21:43 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:36.384 14:21:43 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:36.384 14:21:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.384 14:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.384 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.384 ************************************ 00:07:36.384 START TEST rpc_daemon_integrity 00:07:36.384 ************************************ 00:07:36.384 14:21:43 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:07:36.641 14:21:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:36.641 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.641 14:21:43 -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.641 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.641 14:21:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:36.641 14:21:43 -- rpc/rpc.sh@13 -- # jq length 00:07:36.641 14:21:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:36.641 14:21:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:36.641 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.641 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.641 14:21:43 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:07:36.641 14:21:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:36.641 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.641 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.641 14:21:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:36.641 { 00:07:36.641 "aliases": [ 00:07:36.641 "00ace7cc-2edb-49c2-a982-19d31ff1faf9" 00:07:36.641 ], 00:07:36.641 "assigned_rate_limits": { 00:07:36.641 "r_mbytes_per_sec": 0, 00:07:36.641 "rw_ios_per_sec": 0, 00:07:36.641 "rw_mbytes_per_sec": 0, 00:07:36.641 "w_mbytes_per_sec": 0 00:07:36.641 }, 00:07:36.641 "block_size": 512, 00:07:36.641 "claimed": false, 00:07:36.641 "driver_specific": {}, 00:07:36.641 "memory_domains": [ 00:07:36.641 { 00:07:36.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.641 "dma_device_type": 2 00:07:36.641 } 00:07:36.641 ], 00:07:36.641 "name": "Malloc3", 00:07:36.641 "num_blocks": 16384, 00:07:36.641 "product_name": "Malloc disk", 00:07:36.641 "supported_io_types": { 00:07:36.641 "abort": true, 00:07:36.641 "compare": false, 00:07:36.641 "compare_and_write": false, 00:07:36.641 "flush": true, 00:07:36.641 "nvme_admin": false, 00:07:36.641 "nvme_io": false, 00:07:36.641 "read": true, 00:07:36.641 "reset": true, 00:07:36.641 "unmap": true, 00:07:36.641 "write": true, 00:07:36.641 "write_zeroes": true 00:07:36.641 }, 00:07:36.641 "uuid": "00ace7cc-2edb-49c2-a982-19d31ff1faf9", 00:07:36.641 "zoned": false 00:07:36.641 } 00:07:36.641 ]' 00:07:36.641 14:21:43 -- rpc/rpc.sh@17 -- # jq length 00:07:36.641 14:21:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:36.641 14:21:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:07:36.641 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.641 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 [2024-12-06 14:21:43.500236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:36.641 [2024-12-06 14:21:43.500334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.641 [2024-12-06 14:21:43.500357] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e9f680 00:07:36.641 [2024-12-06 14:21:43.500368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.641 [2024-12-06 14:21:43.502501] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.641 [2024-12-06 14:21:43.502539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:36.641 Passthru0 00:07:36.642 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.642 14:21:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:36.642 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.642 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.642 
14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.642 14:21:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:36.642 { 00:07:36.642 "aliases": [ 00:07:36.642 "00ace7cc-2edb-49c2-a982-19d31ff1faf9" 00:07:36.642 ], 00:07:36.642 "assigned_rate_limits": { 00:07:36.642 "r_mbytes_per_sec": 0, 00:07:36.642 "rw_ios_per_sec": 0, 00:07:36.642 "rw_mbytes_per_sec": 0, 00:07:36.642 "w_mbytes_per_sec": 0 00:07:36.642 }, 00:07:36.642 "block_size": 512, 00:07:36.642 "claim_type": "exclusive_write", 00:07:36.642 "claimed": true, 00:07:36.642 "driver_specific": {}, 00:07:36.642 "memory_domains": [ 00:07:36.642 { 00:07:36.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.642 "dma_device_type": 2 00:07:36.642 } 00:07:36.642 ], 00:07:36.642 "name": "Malloc3", 00:07:36.642 "num_blocks": 16384, 00:07:36.642 "product_name": "Malloc disk", 00:07:36.642 "supported_io_types": { 00:07:36.642 "abort": true, 00:07:36.642 "compare": false, 00:07:36.642 "compare_and_write": false, 00:07:36.642 "flush": true, 00:07:36.642 "nvme_admin": false, 00:07:36.642 "nvme_io": false, 00:07:36.642 "read": true, 00:07:36.642 "reset": true, 00:07:36.642 "unmap": true, 00:07:36.642 "write": true, 00:07:36.642 "write_zeroes": true 00:07:36.642 }, 00:07:36.642 "uuid": "00ace7cc-2edb-49c2-a982-19d31ff1faf9", 00:07:36.642 "zoned": false 00:07:36.642 }, 00:07:36.642 { 00:07:36.642 "aliases": [ 00:07:36.642 "13a31e11-c509-5e2d-8891-a11e8972e3f1" 00:07:36.642 ], 00:07:36.642 "assigned_rate_limits": { 00:07:36.642 "r_mbytes_per_sec": 0, 00:07:36.642 "rw_ios_per_sec": 0, 00:07:36.642 "rw_mbytes_per_sec": 0, 00:07:36.642 "w_mbytes_per_sec": 0 00:07:36.642 }, 00:07:36.642 "block_size": 512, 00:07:36.642 "claimed": false, 00:07:36.642 "driver_specific": { 00:07:36.642 "passthru": { 00:07:36.642 "base_bdev_name": "Malloc3", 00:07:36.642 "name": "Passthru0" 00:07:36.642 } 00:07:36.642 }, 00:07:36.642 "memory_domains": [ 00:07:36.642 { 00:07:36.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.642 "dma_device_type": 2 00:07:36.642 } 00:07:36.642 ], 00:07:36.642 "name": "Passthru0", 00:07:36.642 "num_blocks": 16384, 00:07:36.642 "product_name": "passthru", 00:07:36.642 "supported_io_types": { 00:07:36.642 "abort": true, 00:07:36.642 "compare": false, 00:07:36.642 "compare_and_write": false, 00:07:36.642 "flush": true, 00:07:36.642 "nvme_admin": false, 00:07:36.642 "nvme_io": false, 00:07:36.642 "read": true, 00:07:36.642 "reset": true, 00:07:36.642 "unmap": true, 00:07:36.642 "write": true, 00:07:36.642 "write_zeroes": true 00:07:36.642 }, 00:07:36.642 "uuid": "13a31e11-c509-5e2d-8891-a11e8972e3f1", 00:07:36.642 "zoned": false 00:07:36.642 } 00:07:36.642 ]' 00:07:36.642 14:21:43 -- rpc/rpc.sh@21 -- # jq length 00:07:36.642 14:21:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:36.642 14:21:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:36.642 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.642 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.642 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.642 14:21:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:07:36.642 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.642 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.900 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.900 14:21:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:36.900 14:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.900 14:21:43 -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.900 14:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.900 14:21:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:36.900 14:21:43 -- rpc/rpc.sh@26 -- # jq length 00:07:36.900 14:21:43 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:36.900 00:07:36.900 real 0m0.326s 00:07:36.900 user 0m0.214s 00:07:36.900 sys 0m0.035s 00:07:36.900 14:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.900 ************************************ 00:07:36.900 END TEST rpc_daemon_integrity 00:07:36.900 ************************************ 00:07:36.900 14:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.900 14:21:43 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:36.900 14:21:43 -- rpc/rpc.sh@84 -- # killprocess 55674 00:07:36.900 14:21:43 -- common/autotest_common.sh@936 -- # '[' -z 55674 ']' 00:07:36.900 14:21:43 -- common/autotest_common.sh@940 -- # kill -0 55674 00:07:36.900 14:21:43 -- common/autotest_common.sh@941 -- # uname 00:07:36.900 14:21:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:36.900 14:21:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55674 00:07:36.900 14:21:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:36.900 14:21:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:36.900 14:21:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55674' 00:07:36.900 killing process with pid 55674 00:07:36.900 14:21:43 -- common/autotest_common.sh@955 -- # kill 55674 00:07:36.900 14:21:43 -- common/autotest_common.sh@960 -- # wait 55674 00:07:38.272 00:07:38.272 real 0m4.866s 00:07:38.272 user 0m5.596s 00:07:38.272 sys 0m1.445s 00:07:38.272 14:21:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.272 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.272 ************************************ 00:07:38.272 END TEST rpc 00:07:38.272 ************************************ 00:07:38.272 14:21:44 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:38.272 14:21:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.272 14:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.272 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.272 ************************************ 00:07:38.272 START TEST rpc_client 00:07:38.272 ************************************ 00:07:38.272 14:21:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:38.272 * Looking for test storage... 
00:07:38.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:38.272 14:21:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:38.272 14:21:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:38.272 14:21:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:38.272 14:21:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:38.272 14:21:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:38.272 14:21:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:38.272 14:21:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:38.272 14:21:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:38.272 14:21:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:38.272 14:21:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.272 14:21:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:38.272 14:21:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:38.272 14:21:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:38.272 14:21:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:38.272 14:21:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:38.272 14:21:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:38.272 14:21:45 -- scripts/common.sh@344 -- # : 1 00:07:38.272 14:21:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:38.272 14:21:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.272 14:21:45 -- scripts/common.sh@364 -- # decimal 1 00:07:38.272 14:21:45 -- scripts/common.sh@352 -- # local d=1 00:07:38.272 14:21:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.272 14:21:45 -- scripts/common.sh@354 -- # echo 1 00:07:38.272 14:21:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:38.272 14:21:45 -- scripts/common.sh@365 -- # decimal 2 00:07:38.272 14:21:45 -- scripts/common.sh@352 -- # local d=2 00:07:38.272 14:21:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.272 14:21:45 -- scripts/common.sh@354 -- # echo 2 00:07:38.272 14:21:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:38.272 14:21:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:38.272 14:21:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:38.272 14:21:45 -- scripts/common.sh@367 -- # return 0 00:07:38.272 14:21:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.272 14:21:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:38.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.272 --rc genhtml_branch_coverage=1 00:07:38.272 --rc genhtml_function_coverage=1 00:07:38.272 --rc genhtml_legend=1 00:07:38.272 --rc geninfo_all_blocks=1 00:07:38.272 --rc geninfo_unexecuted_blocks=1 00:07:38.272 00:07:38.272 ' 00:07:38.272 14:21:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:38.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.272 --rc genhtml_branch_coverage=1 00:07:38.272 --rc genhtml_function_coverage=1 00:07:38.272 --rc genhtml_legend=1 00:07:38.272 --rc geninfo_all_blocks=1 00:07:38.272 --rc geninfo_unexecuted_blocks=1 00:07:38.272 00:07:38.272 ' 00:07:38.272 14:21:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:38.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.272 --rc genhtml_branch_coverage=1 00:07:38.272 --rc genhtml_function_coverage=1 00:07:38.272 --rc genhtml_legend=1 00:07:38.272 --rc geninfo_all_blocks=1 00:07:38.272 --rc geninfo_unexecuted_blocks=1 00:07:38.272 00:07:38.272 ' 00:07:38.272 
14:21:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:38.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.272 --rc genhtml_branch_coverage=1 00:07:38.272 --rc genhtml_function_coverage=1 00:07:38.272 --rc genhtml_legend=1 00:07:38.272 --rc geninfo_all_blocks=1 00:07:38.272 --rc geninfo_unexecuted_blocks=1 00:07:38.272 00:07:38.272 ' 00:07:38.272 14:21:45 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:38.272 OK 00:07:38.272 14:21:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:38.272 00:07:38.272 real 0m0.218s 00:07:38.272 user 0m0.133s 00:07:38.272 sys 0m0.097s 00:07:38.272 14:21:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.272 14:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.272 ************************************ 00:07:38.272 END TEST rpc_client 00:07:38.272 ************************************ 00:07:38.272 14:21:45 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:38.272 14:21:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.272 14:21:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.272 14:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.272 ************************************ 00:07:38.272 START TEST json_config 00:07:38.272 ************************************ 00:07:38.272 14:21:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:38.272 14:21:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:38.272 14:21:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:38.272 14:21:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:38.530 14:21:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:38.530 14:21:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:38.530 14:21:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:38.530 14:21:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:38.530 14:21:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:38.530 14:21:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:38.530 14:21:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.530 14:21:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:38.530 14:21:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:38.530 14:21:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:38.530 14:21:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:38.530 14:21:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:38.530 14:21:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:38.530 14:21:45 -- scripts/common.sh@344 -- # : 1 00:07:38.530 14:21:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:38.530 14:21:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.530 14:21:45 -- scripts/common.sh@364 -- # decimal 1 00:07:38.530 14:21:45 -- scripts/common.sh@352 -- # local d=1 00:07:38.530 14:21:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.530 14:21:45 -- scripts/common.sh@354 -- # echo 1 00:07:38.530 14:21:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:38.530 14:21:45 -- scripts/common.sh@365 -- # decimal 2 00:07:38.530 14:21:45 -- scripts/common.sh@352 -- # local d=2 00:07:38.530 14:21:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.530 14:21:45 -- scripts/common.sh@354 -- # echo 2 00:07:38.530 14:21:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:38.530 14:21:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:38.530 14:21:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:38.530 14:21:45 -- scripts/common.sh@367 -- # return 0 00:07:38.530 14:21:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.530 14:21:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:38.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.530 --rc genhtml_branch_coverage=1 00:07:38.530 --rc genhtml_function_coverage=1 00:07:38.530 --rc genhtml_legend=1 00:07:38.530 --rc geninfo_all_blocks=1 00:07:38.530 --rc geninfo_unexecuted_blocks=1 00:07:38.530 00:07:38.530 ' 00:07:38.530 14:21:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:38.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.530 --rc genhtml_branch_coverage=1 00:07:38.530 --rc genhtml_function_coverage=1 00:07:38.530 --rc genhtml_legend=1 00:07:38.530 --rc geninfo_all_blocks=1 00:07:38.530 --rc geninfo_unexecuted_blocks=1 00:07:38.530 00:07:38.530 ' 00:07:38.530 14:21:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:38.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.530 --rc genhtml_branch_coverage=1 00:07:38.530 --rc genhtml_function_coverage=1 00:07:38.530 --rc genhtml_legend=1 00:07:38.530 --rc geninfo_all_blocks=1 00:07:38.530 --rc geninfo_unexecuted_blocks=1 00:07:38.530 00:07:38.530 ' 00:07:38.530 14:21:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:38.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.530 --rc genhtml_branch_coverage=1 00:07:38.530 --rc genhtml_function_coverage=1 00:07:38.530 --rc genhtml_legend=1 00:07:38.530 --rc geninfo_all_blocks=1 00:07:38.530 --rc geninfo_unexecuted_blocks=1 00:07:38.530 00:07:38.530 ' 00:07:38.530 14:21:45 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.530 14:21:45 -- nvmf/common.sh@7 -- # uname -s 00:07:38.530 14:21:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.530 14:21:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.530 14:21:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.530 14:21:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.530 14:21:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.530 14:21:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.530 14:21:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.530 14:21:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.530 14:21:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.530 14:21:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.530 14:21:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
00:07:38.530 14:21:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:07:38.530 14:21:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.530 14:21:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.530 14:21:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:38.530 14:21:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.530 14:21:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.530 14:21:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.530 14:21:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.531 14:21:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.531 14:21:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.531 14:21:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.531 14:21:45 -- paths/export.sh@5 -- # export PATH 00:07:38.531 14:21:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.531 14:21:45 -- nvmf/common.sh@46 -- # : 0 00:07:38.531 14:21:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:38.531 14:21:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:38.531 14:21:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:38.531 14:21:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.531 14:21:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.531 14:21:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:38.531 14:21:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:38.531 14:21:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:38.531 14:21:45 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:38.531 14:21:45 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:38.531 14:21:45 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:38.531 14:21:45 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:38.531 14:21:45 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:07:38.531 14:21:45 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:38.531 14:21:45 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:38.531 14:21:45 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:38.531 14:21:45 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:38.531 14:21:45 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:38.531 14:21:45 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:38.531 14:21:45 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:38.531 14:21:45 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:38.531 14:21:45 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:38.531 INFO: JSON configuration test init 00:07:38.531 14:21:45 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:38.531 14:21:45 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:38.531 14:21:45 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:38.531 14:21:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.531 14:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.531 14:21:45 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:38.531 14:21:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.531 14:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.531 14:21:45 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:38.531 14:21:45 -- json_config/json_config.sh@98 -- # local app=target 00:07:38.531 14:21:45 -- json_config/json_config.sh@99 -- # shift 00:07:38.531 14:21:45 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:38.531 14:21:45 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:38.531 14:21:45 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:38.531 14:21:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:38.531 14:21:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:38.531 14:21:45 -- json_config/json_config.sh@111 -- # app_pid[$app]=56020 00:07:38.531 Waiting for target to run... 00:07:38.531 14:21:45 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:38.531 14:21:45 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:38.531 14:21:45 -- json_config/json_config.sh@114 -- # waitforlisten 56020 /var/tmp/spdk_tgt.sock 00:07:38.531 14:21:45 -- common/autotest_common.sh@829 -- # '[' -z 56020 ']' 00:07:38.531 14:21:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:38.531 14:21:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:38.531 14:21:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
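The "Waiting for process to start up..." message above comes from the test's own waitforlisten helper. A hedged sketch of that step, polling the RPC socket with rpc_get_methods (the probe and the retry budget are assumptions for illustration, not the helper's actual implementation):

#!/usr/bin/env bash
# Illustrative only: launch spdk_tgt paused at --wait-for-rpc and poll until its
# RPC socket answers.
set -euo pipefail

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
sock=/var/tmp/spdk_tgt.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"

$spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
tgt_pid=$!

for _ in $(seq 1 30); do
    if $rpc rpc_get_methods >/dev/null 2>&1; then
        echo "spdk_tgt (pid $tgt_pid) is listening on $sock"
        break
    fi
    sleep 0.5
done
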
00:07:38.531 14:21:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.531 14:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.531 [2024-12-06 14:21:45.417082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.531 [2024-12-06 14:21:45.417217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56020 ] 00:07:39.096 [2024-12-06 14:21:45.880726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.096 [2024-12-06 14:21:46.037345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.096 [2024-12-06 14:21:46.037620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.662 14:21:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.662 00:07:39.662 14:21:46 -- common/autotest_common.sh@862 -- # return 0 00:07:39.662 14:21:46 -- json_config/json_config.sh@115 -- # echo '' 00:07:39.662 14:21:46 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:39.662 14:21:46 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:39.662 14:21:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.662 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:07:39.662 14:21:46 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:39.662 14:21:46 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:39.662 14:21:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.662 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:07:39.662 14:21:46 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:39.662 14:21:46 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:39.662 14:21:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:40.229 14:21:47 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:40.229 14:21:47 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:40.229 14:21:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.229 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.229 14:21:47 -- json_config/json_config.sh@48 -- # local ret=0 00:07:40.229 14:21:47 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:40.229 14:21:47 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:40.229 14:21:47 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:40.229 14:21:47 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:40.229 14:21:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:40.486 14:21:47 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:40.486 14:21:47 -- json_config/json_config.sh@51 -- # local get_types 00:07:40.486 14:21:47 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:40.486 14:21:47 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:40.486 14:21:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.486 14:21:47 -- 
common/autotest_common.sh@10 -- # set +x 00:07:40.487 14:21:47 -- json_config/json_config.sh@58 -- # return 0 00:07:40.487 14:21:47 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:07:40.487 14:21:47 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:40.487 14:21:47 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:40.487 14:21:47 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:07:40.487 14:21:47 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:07:40.487 14:21:47 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:07:40.487 14:21:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.487 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.487 14:21:47 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:40.487 14:21:47 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:07:40.487 14:21:47 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:07:40.487 14:21:47 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:40.487 14:21:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:40.744 MallocForNvmf0 00:07:41.003 14:21:47 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:41.003 14:21:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:41.262 MallocForNvmf1 00:07:41.262 14:21:48 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:41.262 14:21:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:41.520 [2024-12-06 14:21:48.289120] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.520 14:21:48 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.520 14:21:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.778 14:21:48 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:41.778 14:21:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:42.035 14:21:48 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:42.035 14:21:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:42.600 14:21:49 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:42.600 14:21:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:42.858 [2024-12-06 14:21:49.617935] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:42.858 
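At this point the json_config target has its NVMe-oF pieces in place. The create_nvmf_subsystem_config step above is driven through tgt_rpc; issued directly, the same sequence of rpc.py calls looks roughly like this (a sketch, not the test script itself):

#!/usr/bin/env bash
# The RPCs logged above: two malloc namespaces, a TCP transport, one subsystem,
# and a listener on 127.0.0.1:4420.
set -euo pipefail

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
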
14:21:49 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:07:42.858 14:21:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.858 14:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:42.858 14:21:49 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:42.858 14:21:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.858 14:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:42.858 14:21:49 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:42.858 14:21:49 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:42.858 14:21:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:43.115 MallocBdevForConfigChangeCheck 00:07:43.115 14:21:50 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:43.115 14:21:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.115 14:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:43.372 14:21:50 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:43.372 14:21:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:43.681 INFO: shutting down applications... 00:07:43.681 14:21:50 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:43.681 14:21:50 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:43.681 14:21:50 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:43.681 14:21:50 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:43.681 14:21:50 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:44.267 Calling clear_iscsi_subsystem 00:07:44.267 Calling clear_nvmf_subsystem 00:07:44.267 Calling clear_nbd_subsystem 00:07:44.267 Calling clear_ublk_subsystem 00:07:44.267 Calling clear_vhost_blk_subsystem 00:07:44.267 Calling clear_vhost_scsi_subsystem 00:07:44.267 Calling clear_scheduler_subsystem 00:07:44.267 Calling clear_bdev_subsystem 00:07:44.267 Calling clear_accel_subsystem 00:07:44.267 Calling clear_vmd_subsystem 00:07:44.267 Calling clear_sock_subsystem 00:07:44.267 Calling clear_iobuf_subsystem 00:07:44.267 14:21:50 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:44.267 14:21:50 -- json_config/json_config.sh@396 -- # count=100 00:07:44.267 14:21:50 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:44.267 14:21:51 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:44.267 14:21:51 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:44.267 14:21:51 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:44.525 14:21:51 -- json_config/json_config.sh@398 -- # break 00:07:44.525 14:21:51 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:44.525 14:21:51 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:07:44.525 14:21:51 -- json_config/json_config.sh@120 -- # local app=target 00:07:44.525 14:21:51 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:44.525 14:21:51 -- json_config/json_config.sh@124 -- # [[ -n 56020 ]] 00:07:44.525 14:21:51 -- json_config/json_config.sh@127 -- # kill -SIGINT 56020 00:07:44.525 14:21:51 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:44.525 14:21:51 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:44.525 14:21:51 -- json_config/json_config.sh@130 -- # kill -0 56020 00:07:44.525 14:21:51 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:45.092 14:21:51 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:45.092 14:21:51 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:45.092 14:21:51 -- json_config/json_config.sh@130 -- # kill -0 56020 00:07:45.092 14:21:51 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:45.657 14:21:52 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:45.657 14:21:52 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:45.657 14:21:52 -- json_config/json_config.sh@130 -- # kill -0 56020 00:07:45.657 14:21:52 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:45.657 SPDK target shutdown done 00:07:45.657 14:21:52 -- json_config/json_config.sh@132 -- # break 00:07:45.657 14:21:52 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:45.657 14:21:52 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:45.657 INFO: relaunching applications... 00:07:45.657 14:21:52 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:45.657 14:21:52 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:45.657 14:21:52 -- json_config/json_config.sh@98 -- # local app=target 00:07:45.657 14:21:52 -- json_config/json_config.sh@99 -- # shift 00:07:45.657 14:21:52 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:45.657 14:21:52 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:45.657 14:21:52 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:45.657 14:21:52 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:45.657 14:21:52 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:45.657 14:21:52 -- json_config/json_config.sh@111 -- # app_pid[$app]=56307 00:07:45.657 Waiting for target to run... 00:07:45.657 14:21:52 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:45.657 14:21:52 -- json_config/json_config.sh@114 -- # waitforlisten 56307 /var/tmp/spdk_tgt.sock 00:07:45.657 14:21:52 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:45.657 14:21:52 -- common/autotest_common.sh@829 -- # '[' -z 56307 ']' 00:07:45.657 14:21:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:45.657 14:21:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:45.657 14:21:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:45.657 14:21:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.657 14:21:52 -- common/autotest_common.sh@10 -- # set +x 00:07:45.657 [2024-12-06 14:21:52.514981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:45.657 [2024-12-06 14:21:52.515112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56307 ] 00:07:46.589 [2024-12-06 14:21:53.321860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.589 [2024-12-06 14:21:53.450581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.589 [2024-12-06 14:21:53.450790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.845 [2024-12-06 14:21:53.777966] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.845 [2024-12-06 14:21:53.810187] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:47.409 14:21:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.409 00:07:47.409 14:21:54 -- common/autotest_common.sh@862 -- # return 0 00:07:47.409 14:21:54 -- json_config/json_config.sh@115 -- # echo '' 00:07:47.409 14:21:54 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:47.409 INFO: Checking if target configuration is the same... 00:07:47.409 14:21:54 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:47.409 14:21:54 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:47.409 14:21:54 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:47.409 14:21:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:47.409 + '[' 2 -ne 2 ']' 00:07:47.410 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:47.410 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:47.410 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:47.410 +++ basename /dev/fd/62 00:07:47.667 ++ mktemp /tmp/62.XXX 00:07:47.667 + tmp_file_1=/tmp/62.g8q 00:07:47.667 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:47.667 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:47.667 + tmp_file_2=/tmp/spdk_tgt_config.json.RS2 00:07:47.667 + ret=0 00:07:47.667 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:47.925 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:47.925 + diff -u /tmp/62.g8q /tmp/spdk_tgt_config.json.RS2 00:07:47.925 INFO: JSON config files are the same 00:07:47.925 + echo 'INFO: JSON config files are the same' 00:07:47.925 + rm /tmp/62.g8q /tmp/spdk_tgt_config.json.RS2 00:07:48.183 + exit 0 00:07:48.183 14:21:54 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:48.183 INFO: changing configuration and checking if this can be detected... 00:07:48.183 14:21:54 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
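The comparison that just reported "JSON config files are the same" boils down to sorting both JSON documents and diffing them, which is what json_diff.sh does with config_filter.py. A condensed sketch of that check (temp-file names here are illustrative):

#!/usr/bin/env bash
# Dump the live config over RPC, normalize both documents with the sort filter,
# then diff; exit status 0 means the running target still matches the file on disk.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
sort_cfg="$spdk/test/json_config/config_filter.py -method sort"

live=$(mktemp /tmp/live.XXX)
ondisk=$(mktemp /tmp/ondisk.XXX)

$rpc save_config | $sort_cfg > "$live"
$sort_cfg < "$spdk/spdk_tgt_config.json" > "$ondisk"

if diff -u "$ondisk" "$live"; then
    echo "INFO: JSON config files are the same"
else
    echo "INFO: configuration change detected."
fi
rm -f "$live" "$ondisk"
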
00:07:48.183 14:21:54 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:48.183 14:21:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:48.441 14:21:55 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:48.441 14:21:55 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:48.441 14:21:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:48.441 + '[' 2 -ne 2 ']' 00:07:48.441 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:48.441 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:48.441 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:48.441 +++ basename /dev/fd/62 00:07:48.441 ++ mktemp /tmp/62.XXX 00:07:48.441 + tmp_file_1=/tmp/62.xKV 00:07:48.441 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:48.441 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:48.441 + tmp_file_2=/tmp/spdk_tgt_config.json.v9H 00:07:48.441 + ret=0 00:07:48.441 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:49.006 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:49.006 + diff -u /tmp/62.xKV /tmp/spdk_tgt_config.json.v9H 00:07:49.006 + ret=1 00:07:49.006 + echo '=== Start of file: /tmp/62.xKV ===' 00:07:49.006 + cat /tmp/62.xKV 00:07:49.006 + echo '=== End of file: /tmp/62.xKV ===' 00:07:49.006 + echo '' 00:07:49.006 + echo '=== Start of file: /tmp/spdk_tgt_config.json.v9H ===' 00:07:49.006 + cat /tmp/spdk_tgt_config.json.v9H 00:07:49.006 + echo '=== End of file: /tmp/spdk_tgt_config.json.v9H ===' 00:07:49.006 + echo '' 00:07:49.006 + rm /tmp/62.xKV /tmp/spdk_tgt_config.json.v9H 00:07:49.006 + exit 1 00:07:49.006 INFO: configuration change detected. 00:07:49.006 14:21:55 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
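The sequence just traced is the change-detection half: the test knocks the MallocBdevForConfigChangeCheck bdev out of the running configuration over RPC, repeats the same save_config/sort/diff comparison, and this time treats a non-empty diff (ret=1) as success. Roughly, under the same paths as above:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # Remove the marker bdev from the live config ...
    rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    # ... then redo the comparison and expect it to fail.
    rpc save_config | "$filter" -method sort > /tmp/live.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ref.json
    if diff -u /tmp/live.json /tmp/ref.json > /dev/null; then
        echo 'ERROR: configuration change was not detected' >&2
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f /tmp/live.json /tmp/ref.json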
00:07:49.006 14:21:55 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:49.006 14:21:55 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:49.006 14:21:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.006 14:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 14:21:55 -- json_config/json_config.sh@360 -- # local ret=0 00:07:49.006 14:21:55 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:49.006 14:21:55 -- json_config/json_config.sh@370 -- # [[ -n 56307 ]] 00:07:49.006 14:21:55 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:49.006 14:21:55 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:49.006 14:21:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.006 14:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 14:21:55 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:07:49.006 14:21:55 -- json_config/json_config.sh@246 -- # uname -s 00:07:49.006 14:21:55 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:49.006 14:21:55 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:49.006 14:21:55 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:49.006 14:21:55 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:49.006 14:21:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.006 14:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 14:21:55 -- json_config/json_config.sh@376 -- # killprocess 56307 00:07:49.006 14:21:55 -- common/autotest_common.sh@936 -- # '[' -z 56307 ']' 00:07:49.006 14:21:55 -- common/autotest_common.sh@940 -- # kill -0 56307 00:07:49.006 14:21:55 -- common/autotest_common.sh@941 -- # uname 00:07:49.006 14:21:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.006 14:21:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56307 00:07:49.006 killing process with pid 56307 00:07:49.006 14:21:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.006 14:21:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.006 14:21:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56307' 00:07:49.006 14:21:55 -- common/autotest_common.sh@955 -- # kill 56307 00:07:49.006 14:21:55 -- common/autotest_common.sh@960 -- # wait 56307 00:07:49.941 14:21:56 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:49.941 14:21:56 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:49.941 14:21:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.941 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:07:49.941 14:21:56 -- json_config/json_config.sh@381 -- # return 0 00:07:49.941 INFO: Success 00:07:49.941 14:21:56 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:49.941 ************************************ 00:07:49.941 END TEST json_config 00:07:49.941 ************************************ 00:07:49.941 00:07:49.941 real 0m11.460s 00:07:49.941 user 0m15.872s 00:07:49.941 sys 0m2.577s 00:07:49.941 14:21:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.941 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:07:49.941 14:21:56 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:49.941 
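The teardown above ran through the killprocess helper: it verifies the pid is still alive, reads the process's command name (an SPDK target shows up as reactor_0), prints which pid it is killing, then signals and reaps it. A simplified equivalent (the real helper in common/autotest_common.sh performs some extra sudo and platform checks):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # nothing left to do
        local name
        name=$(ps --no-headers -o comm= "$pid")           # reactor_0 for spdk_tgt
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap it if it is our child
    }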
14:21:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.941 14:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.941 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:07:49.941 ************************************ 00:07:49.941 START TEST json_config_extra_key 00:07:49.941 ************************************ 00:07:49.941 14:21:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:49.941 14:21:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.941 14:21:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.941 14:21:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.941 14:21:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.941 14:21:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.941 14:21:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.941 14:21:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.941 14:21:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.941 14:21:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.941 14:21:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.941 14:21:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.941 14:21:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.941 14:21:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.941 14:21:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.941 14:21:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.941 14:21:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.941 14:21:56 -- scripts/common.sh@344 -- # : 1 00:07:49.941 14:21:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.941 14:21:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.941 14:21:56 -- scripts/common.sh@364 -- # decimal 1 00:07:49.941 14:21:56 -- scripts/common.sh@352 -- # local d=1 00:07:49.941 14:21:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.941 14:21:56 -- scripts/common.sh@354 -- # echo 1 00:07:49.941 14:21:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.941 14:21:56 -- scripts/common.sh@365 -- # decimal 2 00:07:49.941 14:21:56 -- scripts/common.sh@352 -- # local d=2 00:07:49.941 14:21:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.941 14:21:56 -- scripts/common.sh@354 -- # echo 2 00:07:49.941 14:21:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.941 14:21:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.941 14:21:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.941 14:21:56 -- scripts/common.sh@367 -- # return 0 00:07:49.941 14:21:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.942 14:21:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.942 --rc genhtml_branch_coverage=1 00:07:49.942 --rc genhtml_function_coverage=1 00:07:49.942 --rc genhtml_legend=1 00:07:49.942 --rc geninfo_all_blocks=1 00:07:49.942 --rc geninfo_unexecuted_blocks=1 00:07:49.942 00:07:49.942 ' 00:07:49.942 14:21:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.942 --rc genhtml_branch_coverage=1 00:07:49.942 --rc genhtml_function_coverage=1 00:07:49.942 --rc genhtml_legend=1 00:07:49.942 --rc geninfo_all_blocks=1 00:07:49.942 --rc geninfo_unexecuted_blocks=1 00:07:49.942 00:07:49.942 ' 
00:07:49.942 14:21:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.942 --rc genhtml_branch_coverage=1 00:07:49.942 --rc genhtml_function_coverage=1 00:07:49.942 --rc genhtml_legend=1 00:07:49.942 --rc geninfo_all_blocks=1 00:07:49.942 --rc geninfo_unexecuted_blocks=1 00:07:49.942 00:07:49.942 ' 00:07:49.942 14:21:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.942 --rc genhtml_branch_coverage=1 00:07:49.942 --rc genhtml_function_coverage=1 00:07:49.942 --rc genhtml_legend=1 00:07:49.942 --rc geninfo_all_blocks=1 00:07:49.942 --rc geninfo_unexecuted_blocks=1 00:07:49.942 00:07:49.942 ' 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.942 14:21:56 -- nvmf/common.sh@7 -- # uname -s 00:07:49.942 14:21:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.942 14:21:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.942 14:21:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.942 14:21:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.942 14:21:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.942 14:21:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.942 14:21:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.942 14:21:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.942 14:21:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.942 14:21:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.942 14:21:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:07:49.942 14:21:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:07:49.942 14:21:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.942 14:21:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.942 14:21:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:49.942 14:21:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.942 14:21:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.942 14:21:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.942 14:21:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.942 14:21:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.942 14:21:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.942 14:21:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.942 14:21:56 -- paths/export.sh@5 -- # export PATH 00:07:49.942 14:21:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.942 14:21:56 -- nvmf/common.sh@46 -- # : 0 00:07:49.942 14:21:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:49.942 14:21:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:49.942 14:21:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:49.942 14:21:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.942 14:21:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.942 14:21:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:49.942 14:21:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:49.942 14:21:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:49.942 INFO: launching applications... 00:07:49.942 Waiting for target to run... 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56514 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
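The launch just recorded comes down to: start spdk_tgt in the background with a one-core mask (-m 0x1), 1024 MiB of memory (-s 1024) and a private RPC socket, point it at extra_key.json, remember its pid in app_pid[target], and wait for the socket before going on. Stripped down, with the same paths as this run:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk_tgt.sock
    "$spdk/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
        --json "$spdk/test/json_config/extra_key.json" &
    app_pid=$!                      # the test stores this as app_pid[target]
    echo 'Waiting for target to run...'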
00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56514 /var/tmp/spdk_tgt.sock 00:07:49.942 14:21:56 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:49.942 14:21:56 -- common/autotest_common.sh@829 -- # '[' -z 56514 ']' 00:07:49.942 14:21:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:49.942 14:21:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.942 14:21:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:49.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:49.942 14:21:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.942 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:07:49.943 [2024-12-06 14:21:56.903503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.943 [2024-12-06 14:21:56.903997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56514 ] 00:07:50.878 [2024-12-06 14:21:57.511280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.878 [2024-12-06 14:21:57.625743] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.878 [2024-12-06 14:21:57.626219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.136 14:21:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.136 14:21:57 -- common/autotest_common.sh@862 -- # return 0 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:51.136 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:07:51.136 INFO: shutting down applications... 
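The shutdown that follows is a polite SIGINT with a bounded wait: the target gets up to 30 half-second grace periods to exit before the test would give up. In outline:

    kill -SIGINT "$app_pid"                       # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only probes existence
        sleep 0.5
    done
    if kill -0 "$app_pid" 2>/dev/null; then
        echo 'ERROR: SPDK target did not exit' >&2
    else
        echo 'SPDK target shutdown done'
    fi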
00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56514 ]] 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56514 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56514 00:07:51.136 14:21:57 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:51.414 14:21:58 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:51.414 14:21:58 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:51.414 14:21:58 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56514 00:07:51.414 14:21:58 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56514 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:51.983 SPDK target shutdown done 00:07:51.983 Success 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:51.983 14:21:58 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:51.983 00:07:51.983 real 0m2.242s 00:07:51.983 user 0m1.730s 00:07:51.983 sys 0m0.645s 00:07:51.983 ************************************ 00:07:51.983 END TEST json_config_extra_key 00:07:51.983 ************************************ 00:07:51.983 14:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.983 14:21:58 -- common/autotest_common.sh@10 -- # set +x 00:07:51.983 14:21:58 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:51.983 14:21:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.983 14:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.983 14:21:58 -- common/autotest_common.sh@10 -- # set +x 00:07:51.983 ************************************ 00:07:51.983 START TEST alias_rpc 00:07:51.983 ************************************ 00:07:51.983 14:21:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:52.242 * Looking for test storage... 
00:07:52.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:52.242 14:21:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:52.242 14:21:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:52.242 14:21:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:52.242 14:21:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:52.242 14:21:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:52.242 14:21:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:52.242 14:21:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:52.242 14:21:59 -- scripts/common.sh@335 -- # IFS=.-: 00:07:52.242 14:21:59 -- scripts/common.sh@335 -- # read -ra ver1 00:07:52.242 14:21:59 -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.242 14:21:59 -- scripts/common.sh@336 -- # read -ra ver2 00:07:52.242 14:21:59 -- scripts/common.sh@337 -- # local 'op=<' 00:07:52.242 14:21:59 -- scripts/common.sh@339 -- # ver1_l=2 00:07:52.242 14:21:59 -- scripts/common.sh@340 -- # ver2_l=1 00:07:52.242 14:21:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:52.242 14:21:59 -- scripts/common.sh@343 -- # case "$op" in 00:07:52.242 14:21:59 -- scripts/common.sh@344 -- # : 1 00:07:52.242 14:21:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:52.242 14:21:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.242 14:21:59 -- scripts/common.sh@364 -- # decimal 1 00:07:52.242 14:21:59 -- scripts/common.sh@352 -- # local d=1 00:07:52.242 14:21:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.242 14:21:59 -- scripts/common.sh@354 -- # echo 1 00:07:52.242 14:21:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:52.242 14:21:59 -- scripts/common.sh@365 -- # decimal 2 00:07:52.242 14:21:59 -- scripts/common.sh@352 -- # local d=2 00:07:52.242 14:21:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.242 14:21:59 -- scripts/common.sh@354 -- # echo 2 00:07:52.242 14:21:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:52.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
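That "Waiting for process to start up and listen..." message comes from the waitforlisten helper. A minimal stand-in for what it guarantees here (this is not the real helper, which also probes the RPC layer; it only captures the idea of polling until the socket appears while the pid stays alive):

    waitforsocket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        while ((retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [ -S "$sock" ] && return 0               # socket is there: good enough
            sleep 0.1
        done
        return 1                                     # timed out
    }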
00:07:52.242 14:21:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:52.242 14:21:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:52.242 14:21:59 -- scripts/common.sh@367 -- # return 0 00:07:52.242 14:21:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.242 14:21:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:52.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.242 --rc genhtml_branch_coverage=1 00:07:52.242 --rc genhtml_function_coverage=1 00:07:52.242 --rc genhtml_legend=1 00:07:52.242 --rc geninfo_all_blocks=1 00:07:52.242 --rc geninfo_unexecuted_blocks=1 00:07:52.242 00:07:52.242 ' 00:07:52.242 14:21:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:52.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.242 --rc genhtml_branch_coverage=1 00:07:52.242 --rc genhtml_function_coverage=1 00:07:52.242 --rc genhtml_legend=1 00:07:52.242 --rc geninfo_all_blocks=1 00:07:52.242 --rc geninfo_unexecuted_blocks=1 00:07:52.242 00:07:52.242 ' 00:07:52.242 14:21:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:52.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.242 --rc genhtml_branch_coverage=1 00:07:52.242 --rc genhtml_function_coverage=1 00:07:52.242 --rc genhtml_legend=1 00:07:52.242 --rc geninfo_all_blocks=1 00:07:52.242 --rc geninfo_unexecuted_blocks=1 00:07:52.242 00:07:52.242 ' 00:07:52.242 14:21:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:52.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.242 --rc genhtml_branch_coverage=1 00:07:52.242 --rc genhtml_function_coverage=1 00:07:52.242 --rc genhtml_legend=1 00:07:52.242 --rc geninfo_all_blocks=1 00:07:52.242 --rc geninfo_unexecuted_blocks=1 00:07:52.242 00:07:52.242 ' 00:07:52.242 14:21:59 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:52.242 14:21:59 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56604 00:07:52.242 14:21:59 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56604 00:07:52.242 14:21:59 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:52.242 14:21:59 -- common/autotest_common.sh@829 -- # '[' -z 56604 ']' 00:07:52.243 14:21:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.243 14:21:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.243 14:21:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.243 14:21:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.243 14:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:52.501 [2024-12-06 14:21:59.216264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:52.501 [2024-12-06 14:21:59.216744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56604 ] 00:07:52.501 [2024-12-06 14:21:59.361331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.759 [2024-12-06 14:21:59.535275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:52.759 [2024-12-06 14:21:59.535494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.325 14:22:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.325 14:22:00 -- common/autotest_common.sh@862 -- # return 0 00:07:53.325 14:22:00 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:53.584 14:22:00 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56604 00:07:53.584 14:22:00 -- common/autotest_common.sh@936 -- # '[' -z 56604 ']' 00:07:53.584 14:22:00 -- common/autotest_common.sh@940 -- # kill -0 56604 00:07:53.584 14:22:00 -- common/autotest_common.sh@941 -- # uname 00:07:53.584 14:22:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.584 14:22:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56604 00:07:53.842 killing process with pid 56604 00:07:53.842 14:22:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.842 14:22:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.842 14:22:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56604' 00:07:53.842 14:22:00 -- common/autotest_common.sh@955 -- # kill 56604 00:07:53.842 14:22:00 -- common/autotest_common.sh@960 -- # wait 56604 00:07:54.409 ************************************ 00:07:54.409 END TEST alias_rpc 00:07:54.409 ************************************ 00:07:54.409 00:07:54.409 real 0m2.384s 00:07:54.409 user 0m2.507s 00:07:54.409 sys 0m0.654s 00:07:54.409 14:22:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.409 14:22:01 -- common/autotest_common.sh@10 -- # set +x 00:07:54.409 14:22:01 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:07:54.409 14:22:01 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:54.409 14:22:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.409 14:22:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.409 14:22:01 -- common/autotest_common.sh@10 -- # set +x 00:07:54.667 ************************************ 00:07:54.667 START TEST dpdk_mem_utility 00:07:54.667 ************************************ 00:07:54.667 14:22:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:54.667 * Looking for test storage... 
00:07:54.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:54.667 14:22:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.667 14:22:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.667 14:22:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.667 14:22:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.667 14:22:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.667 14:22:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.667 14:22:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.667 14:22:01 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.667 14:22:01 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.667 14:22:01 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.667 14:22:01 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.667 14:22:01 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.667 14:22:01 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.667 14:22:01 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.667 14:22:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.667 14:22:01 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.667 14:22:01 -- scripts/common.sh@344 -- # : 1 00:07:54.667 14:22:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.667 14:22:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.667 14:22:01 -- scripts/common.sh@364 -- # decimal 1 00:07:54.667 14:22:01 -- scripts/common.sh@352 -- # local d=1 00:07:54.667 14:22:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.667 14:22:01 -- scripts/common.sh@354 -- # echo 1 00:07:54.667 14:22:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.667 14:22:01 -- scripts/common.sh@365 -- # decimal 2 00:07:54.667 14:22:01 -- scripts/common.sh@352 -- # local d=2 00:07:54.667 14:22:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.667 14:22:01 -- scripts/common.sh@354 -- # echo 2 00:07:54.667 14:22:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.667 14:22:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.667 14:22:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.667 14:22:01 -- scripts/common.sh@367 -- # return 0 00:07:54.667 14:22:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
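The dpdk_mem_utility test that follows drives exactly two things: an RPC that asks the running target to write its DPDK memory statistics to a file (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which turns that dump into the heap/mempool/memzone summary and the per-element map printed further down. Reduced to its essentials (default RPC socket, paths as in this run):

    spdk=/home/vagrant/spdk_repo/spdk
    # Ask the target to dump its DPDK memory stats; the JSON reply carries the file name.
    "$spdk/scripts/rpc.py" env_dpdk_get_mem_stats     # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    # Summarize the dump: heap, mempool and memzone totals ...
    "$spdk/scripts/dpdk_mem_info.py"
    # ... and the detailed element list for heap 0, as shown in the output below.
    "$spdk/scripts/dpdk_mem_info.py" -m 0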
00:07:54.667 14:22:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.667 --rc genhtml_branch_coverage=1 00:07:54.667 --rc genhtml_function_coverage=1 00:07:54.667 --rc genhtml_legend=1 00:07:54.667 --rc geninfo_all_blocks=1 00:07:54.667 --rc geninfo_unexecuted_blocks=1 00:07:54.667 00:07:54.667 ' 00:07:54.667 14:22:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.667 --rc genhtml_branch_coverage=1 00:07:54.667 --rc genhtml_function_coverage=1 00:07:54.667 --rc genhtml_legend=1 00:07:54.667 --rc geninfo_all_blocks=1 00:07:54.667 --rc geninfo_unexecuted_blocks=1 00:07:54.667 00:07:54.667 ' 00:07:54.667 14:22:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.667 --rc genhtml_branch_coverage=1 00:07:54.667 --rc genhtml_function_coverage=1 00:07:54.667 --rc genhtml_legend=1 00:07:54.667 --rc geninfo_all_blocks=1 00:07:54.667 --rc geninfo_unexecuted_blocks=1 00:07:54.667 00:07:54.667 ' 00:07:54.667 14:22:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.667 --rc genhtml_branch_coverage=1 00:07:54.667 --rc genhtml_function_coverage=1 00:07:54.667 --rc genhtml_legend=1 00:07:54.667 --rc geninfo_all_blocks=1 00:07:54.667 --rc geninfo_unexecuted_blocks=1 00:07:54.667 00:07:54.668 ' 00:07:54.668 14:22:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:54.668 14:22:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56709 00:07:54.668 14:22:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:54.668 14:22:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56709 00:07:54.668 14:22:01 -- common/autotest_common.sh@829 -- # '[' -z 56709 ']' 00:07:54.668 14:22:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.668 14:22:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.668 14:22:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.668 14:22:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.668 14:22:01 -- common/autotest_common.sh@10 -- # set +x 00:07:54.925 [2024-12-06 14:22:01.640888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:54.925 [2024-12-06 14:22:01.641298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56709 ] 00:07:54.925 [2024-12-06 14:22:01.780427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.183 [2024-12-06 14:22:01.957913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.183 [2024-12-06 14:22:01.958487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.750 14:22:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.750 14:22:02 -- common/autotest_common.sh@862 -- # return 0 00:07:55.750 14:22:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:55.750 14:22:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:55.750 14:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.750 14:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:55.750 { 00:07:55.750 "filename": "/tmp/spdk_mem_dump.txt" 00:07:55.750 } 00:07:55.750 14:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.750 14:22:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:56.009 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:56.009 1 heaps totaling size 814.000000 MiB 00:07:56.009 size: 814.000000 MiB heap id: 0 00:07:56.009 end heaps---------- 00:07:56.009 8 mempools totaling size 598.116089 MiB 00:07:56.009 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:56.009 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:56.009 size: 84.521057 MiB name: bdev_io_56709 00:07:56.009 size: 51.011292 MiB name: evtpool_56709 00:07:56.009 size: 50.003479 MiB name: msgpool_56709 00:07:56.009 size: 21.763794 MiB name: PDU_Pool 00:07:56.009 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:56.009 size: 0.026123 MiB name: Session_Pool 00:07:56.009 end mempools------- 00:07:56.009 6 memzones totaling size 4.142822 MiB 00:07:56.009 size: 1.000366 MiB name: RG_ring_0_56709 00:07:56.009 size: 1.000366 MiB name: RG_ring_1_56709 00:07:56.009 size: 1.000366 MiB name: RG_ring_4_56709 00:07:56.009 size: 1.000366 MiB name: RG_ring_5_56709 00:07:56.009 size: 0.125366 MiB name: RG_ring_2_56709 00:07:56.009 size: 0.015991 MiB name: RG_ring_3_56709 00:07:56.009 end memzones------- 00:07:56.009 14:22:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:56.009 heap id: 0 total size: 814.000000 MiB number of busy elements: 231 number of free elements: 15 00:07:56.009 list of free elements. 
size: 12.484558 MiB 00:07:56.009 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:56.009 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:56.009 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:56.009 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:56.009 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:56.009 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:56.009 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:56.009 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:56.009 element at address: 0x200000200000 with size: 0.837219 MiB 00:07:56.009 element at address: 0x20001aa00000 with size: 0.570984 MiB 00:07:56.009 element at address: 0x20000b200000 with size: 0.489258 MiB 00:07:56.009 element at address: 0x200000800000 with size: 0.486877 MiB 00:07:56.009 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:56.009 element at address: 0x200027e00000 with size: 0.397949 MiB 00:07:56.009 element at address: 0x200003a00000 with size: 0.351501 MiB 00:07:56.009 list of standard malloc elements. size: 199.252869 MiB 00:07:56.009 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:56.009 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:56.009 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:56.009 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:56.009 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:56.009 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:56.009 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:56.009 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:56.009 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:56.009 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:07:56.009 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:07:56.010 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:07:56.010 element at 
address: 0x20000b27d700 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa93f40 
with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:56.010 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:56.011 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6d980 with size: 0.000183 MiB 
00:07:56.011 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:56.011 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:56.011 element at 
address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:56.011 list of memzone associated elements. size: 602.262573 MiB 00:07:56.011 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:56.011 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:56.011 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:56.011 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:56.011 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:56.011 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56709_0 00:07:56.011 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:56.011 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56709_0 00:07:56.011 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:56.011 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56709_0 00:07:56.011 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:56.011 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:56.011 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:56.011 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:56.011 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:56.011 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56709 00:07:56.011 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:56.011 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56709 00:07:56.011 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:56.011 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56709 00:07:56.011 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:56.011 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:56.011 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:56.011 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:56.011 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:56.011 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:56.011 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:56.011 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:56.011 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:56.011 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56709 00:07:56.012 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:56.012 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56709 00:07:56.012 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:56.012 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56709 00:07:56.012 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:56.012 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56709 00:07:56.012 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:56.012 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56709 00:07:56.012 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:56.012 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:56.012 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:56.012 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:56.012 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:56.012 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:56.012 
element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:56.012 associated memzone info: size: 0.125366 MiB name: RG_ring_2_56709 00:07:56.012 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:56.012 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:56.012 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:07:56.012 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:56.012 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:56.012 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56709 00:07:56.012 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:07:56.012 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:56.012 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:07:56.012 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56709 00:07:56.012 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:56.012 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56709 00:07:56.012 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:07:56.012 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:56.012 14:22:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:56.012 14:22:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56709 00:07:56.012 14:22:02 -- common/autotest_common.sh@936 -- # '[' -z 56709 ']' 00:07:56.012 14:22:02 -- common/autotest_common.sh@940 -- # kill -0 56709 00:07:56.012 14:22:02 -- common/autotest_common.sh@941 -- # uname 00:07:56.012 14:22:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:56.012 14:22:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56709 00:07:56.012 killing process with pid 56709 00:07:56.012 14:22:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:56.012 14:22:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:56.012 14:22:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56709' 00:07:56.012 14:22:02 -- common/autotest_common.sh@955 -- # kill 56709 00:07:56.012 14:22:02 -- common/autotest_common.sh@960 -- # wait 56709 00:07:57.386 ************************************ 00:07:57.386 END TEST dpdk_mem_utility 00:07:57.386 ************************************ 00:07:57.386 00:07:57.386 real 0m2.857s 00:07:57.386 user 0m2.759s 00:07:57.386 sys 0m0.802s 00:07:57.386 14:22:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.386 14:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 14:22:04 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:57.386 14:22:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.386 14:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.386 14:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 ************************************ 00:07:57.386 START TEST event 00:07:57.386 ************************************ 00:07:57.386 14:22:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:57.644 * Looking for test storage... 
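Every suite in this log is invoked through the run_test helper traced from common/autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timing that bracket the dpdk_mem_utility run above and the event run below. A rough sketch of that wrapper's shape, for orientation only (the real helper also handles xtrace toggling and exit-status bookkeeping):

# Illustrative only -- not the actual run_test from autotest_common.sh.
run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
}
# e.g. run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh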
00:07:57.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:57.644 14:22:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:57.644 14:22:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:57.644 14:22:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:57.644 14:22:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:57.644 14:22:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:57.644 14:22:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:57.644 14:22:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:57.644 14:22:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:57.644 14:22:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:57.644 14:22:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.644 14:22:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:57.644 14:22:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:57.644 14:22:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:57.644 14:22:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:57.644 14:22:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:57.644 14:22:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:57.644 14:22:04 -- scripts/common.sh@344 -- # : 1 00:07:57.644 14:22:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:57.644 14:22:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.644 14:22:04 -- scripts/common.sh@364 -- # decimal 1 00:07:57.644 14:22:04 -- scripts/common.sh@352 -- # local d=1 00:07:57.644 14:22:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.644 14:22:04 -- scripts/common.sh@354 -- # echo 1 00:07:57.644 14:22:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:57.644 14:22:04 -- scripts/common.sh@365 -- # decimal 2 00:07:57.644 14:22:04 -- scripts/common.sh@352 -- # local d=2 00:07:57.644 14:22:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.644 14:22:04 -- scripts/common.sh@354 -- # echo 2 00:07:57.644 14:22:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:57.644 14:22:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:57.644 14:22:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:57.644 14:22:04 -- scripts/common.sh@367 -- # return 0 00:07:57.644 14:22:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.644 14:22:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:57.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.644 --rc genhtml_branch_coverage=1 00:07:57.644 --rc genhtml_function_coverage=1 00:07:57.644 --rc genhtml_legend=1 00:07:57.644 --rc geninfo_all_blocks=1 00:07:57.644 --rc geninfo_unexecuted_blocks=1 00:07:57.644 00:07:57.644 ' 00:07:57.644 14:22:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:57.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.644 --rc genhtml_branch_coverage=1 00:07:57.644 --rc genhtml_function_coverage=1 00:07:57.644 --rc genhtml_legend=1 00:07:57.644 --rc geninfo_all_blocks=1 00:07:57.644 --rc geninfo_unexecuted_blocks=1 00:07:57.644 00:07:57.644 ' 00:07:57.644 14:22:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:57.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.644 --rc genhtml_branch_coverage=1 00:07:57.644 --rc genhtml_function_coverage=1 00:07:57.644 --rc genhtml_legend=1 00:07:57.644 --rc geninfo_all_blocks=1 00:07:57.644 --rc geninfo_unexecuted_blocks=1 00:07:57.644 00:07:57.644 ' 00:07:57.644 14:22:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:57.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.644 --rc genhtml_branch_coverage=1 00:07:57.644 --rc genhtml_function_coverage=1 00:07:57.644 --rc genhtml_legend=1 00:07:57.644 --rc geninfo_all_blocks=1 00:07:57.644 --rc geninfo_unexecuted_blocks=1 00:07:57.644 00:07:57.644 ' 00:07:57.644 14:22:04 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:57.644 14:22:04 -- bdev/nbd_common.sh@6 -- # set -e 00:07:57.644 14:22:04 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:57.644 14:22:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:57.644 14:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.644 14:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 ************************************ 00:07:57.644 START TEST event_perf 00:07:57.644 ************************************ 00:07:57.644 14:22:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:57.644 Running I/O for 1 seconds...[2024-12-06 14:22:04.566222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.644 [2024-12-06 14:22:04.566557] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56822 ] 00:07:57.902 [2024-12-06 14:22:04.705656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.160 [2024-12-06 14:22:04.969108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.160 [2024-12-06 14:22:04.969186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.160 [2024-12-06 14:22:04.969291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.160 [2024-12-06 14:22:04.969307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.532 Running I/O for 1 seconds... 00:07:59.532 lcore 0: 86725 00:07:59.532 lcore 1: 86728 00:07:59.532 lcore 2: 86732 00:07:59.532 lcore 3: 86735 00:07:59.532 done. 00:07:59.532 00:07:59.532 real 0m1.737s 00:07:59.532 user 0m4.444s 00:07:59.532 sys 0m0.144s 00:07:59.532 14:22:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.532 ************************************ 00:07:59.532 END TEST event_perf 00:07:59.532 ************************************ 00:07:59.532 14:22:06 -- common/autotest_common.sh@10 -- # set +x 00:07:59.532 14:22:06 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:59.532 14:22:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:59.532 14:22:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.532 14:22:06 -- common/autotest_common.sh@10 -- # set +x 00:07:59.532 ************************************ 00:07:59.532 START TEST event_reactor 00:07:59.532 ************************************ 00:07:59.532 14:22:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:59.532 [2024-12-06 14:22:06.359167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:59.532 [2024-12-06 14:22:06.359296] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56860 ] 00:07:59.532 [2024-12-06 14:22:06.496850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.097 [2024-12-06 14:22:06.840711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.467 test_start 00:08:01.467 oneshot 00:08:01.467 tick 100 00:08:01.467 tick 100 00:08:01.467 tick 250 00:08:01.467 tick 100 00:08:01.467 tick 100 00:08:01.467 tick 250 00:08:01.467 tick 500 00:08:01.467 tick 100 00:08:01.467 tick 100 00:08:01.467 tick 100 00:08:01.467 tick 250 00:08:01.467 tick 100 00:08:01.467 tick 100 00:08:01.467 test_end 00:08:01.467 00:08:01.467 real 0m1.810s 00:08:01.467 user 0m1.528s 00:08:01.467 sys 0m0.165s 00:08:01.467 ************************************ 00:08:01.467 END TEST event_reactor 00:08:01.467 ************************************ 00:08:01.467 14:22:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.467 14:22:08 -- common/autotest_common.sh@10 -- # set +x 00:08:01.467 14:22:08 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:01.467 14:22:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:01.467 14:22:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.467 14:22:08 -- common/autotest_common.sh@10 -- # set +x 00:08:01.467 ************************************ 00:08:01.467 START TEST event_reactor_perf 00:08:01.467 ************************************ 00:08:01.467 14:22:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:01.467 [2024-12-06 14:22:08.223645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:01.467 [2024-12-06 14:22:08.223824] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56901 ] 00:08:01.467 [2024-12-06 14:22:08.369191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.725 [2024-12-06 14:22:08.674283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.098 test_start 00:08:03.098 test_end 00:08:03.098 Performance: 365138 events per second 00:08:03.098 ************************************ 00:08:03.098 END TEST event_reactor_perf 00:08:03.098 ************************************ 00:08:03.098 00:08:03.098 real 0m1.837s 00:08:03.098 user 0m1.552s 00:08:03.098 sys 0m0.166s 00:08:03.098 14:22:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.098 14:22:10 -- common/autotest_common.sh@10 -- # set +x 00:08:03.356 14:22:10 -- event/event.sh@49 -- # uname -s 00:08:03.356 14:22:10 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:03.356 14:22:10 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:03.356 14:22:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.356 14:22:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.356 14:22:10 -- common/autotest_common.sh@10 -- # set +x 00:08:03.356 ************************************ 00:08:03.356 START TEST event_scheduler 00:08:03.356 ************************************ 00:08:03.356 14:22:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:03.356 * Looking for test storage... 00:08:03.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:03.356 14:22:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.356 14:22:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.356 14:22:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.356 14:22:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.356 14:22:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.356 14:22:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.356 14:22:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.356 14:22:10 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.356 14:22:10 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.356 14:22:10 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.356 14:22:10 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.356 14:22:10 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.356 14:22:10 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.356 14:22:10 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.356 14:22:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.356 14:22:10 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.356 14:22:10 -- scripts/common.sh@344 -- # : 1 00:08:03.356 14:22:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.356 14:22:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.356 14:22:10 -- scripts/common.sh@364 -- # decimal 1 00:08:03.356 14:22:10 -- scripts/common.sh@352 -- # local d=1 00:08:03.356 14:22:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.356 14:22:10 -- scripts/common.sh@354 -- # echo 1 00:08:03.356 14:22:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.356 14:22:10 -- scripts/common.sh@365 -- # decimal 2 00:08:03.356 14:22:10 -- scripts/common.sh@352 -- # local d=2 00:08:03.356 14:22:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.356 14:22:10 -- scripts/common.sh@354 -- # echo 2 00:08:03.356 14:22:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.356 14:22:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.356 14:22:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.356 14:22:10 -- scripts/common.sh@367 -- # return 0 00:08:03.356 14:22:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.357 14:22:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.357 --rc genhtml_branch_coverage=1 00:08:03.357 --rc genhtml_function_coverage=1 00:08:03.357 --rc genhtml_legend=1 00:08:03.357 --rc geninfo_all_blocks=1 00:08:03.357 --rc geninfo_unexecuted_blocks=1 00:08:03.357 00:08:03.357 ' 00:08:03.357 14:22:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.357 --rc genhtml_branch_coverage=1 00:08:03.357 --rc genhtml_function_coverage=1 00:08:03.357 --rc genhtml_legend=1 00:08:03.357 --rc geninfo_all_blocks=1 00:08:03.357 --rc geninfo_unexecuted_blocks=1 00:08:03.357 00:08:03.357 ' 00:08:03.357 14:22:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.357 --rc genhtml_branch_coverage=1 00:08:03.357 --rc genhtml_function_coverage=1 00:08:03.357 --rc genhtml_legend=1 00:08:03.357 --rc geninfo_all_blocks=1 00:08:03.357 --rc geninfo_unexecuted_blocks=1 00:08:03.357 00:08:03.357 ' 00:08:03.357 14:22:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.357 --rc genhtml_branch_coverage=1 00:08:03.357 --rc genhtml_function_coverage=1 00:08:03.357 --rc genhtml_legend=1 00:08:03.357 --rc geninfo_all_blocks=1 00:08:03.357 --rc geninfo_unexecuted_blocks=1 00:08:03.357 00:08:03.357 ' 00:08:03.357 14:22:10 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:03.357 14:22:10 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56975 00:08:03.357 14:22:10 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:03.357 14:22:10 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.357 14:22:10 -- scheduler/scheduler.sh@37 -- # waitforlisten 56975 00:08:03.357 14:22:10 -- common/autotest_common.sh@829 -- # '[' -z 56975 ']' 00:08:03.357 14:22:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.357 14:22:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.357 14:22:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
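The commands traced here bring the scheduler test app up in two stages: the binary is started with --wait-for-rpc so the framework pauses before full initialization, waitforlisten polls the RPC socket until the app answers, and only then are the scheduler RPCs issued. Condensed into plain commands taken from this trace (the real waitforlisten helper in autotest_common.sh adds retries, a timeout, and PID checks):

# Launch paused at the RPC stage, with the core mask and main lcore used above.
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!

# Poll the default RPC socket until the app responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
done

# Pick the dynamic scheduler, then let framework initialization continue
# (these correspond to the framework_set_scheduler / framework_start_init calls traced below).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init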
00:08:03.357 14:22:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.357 14:22:10 -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 [2024-12-06 14:22:10.360536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:03.616 [2024-12-06 14:22:10.361031] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56975 ] 00:08:03.616 [2024-12-06 14:22:10.501593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.874 [2024-12-06 14:22:10.698164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.874 [2024-12-06 14:22:10.698274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.874 [2024-12-06 14:22:10.698389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.874 [2024-12-06 14:22:10.698392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.810 14:22:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.810 14:22:11 -- common/autotest_common.sh@862 -- # return 0 00:08:04.810 14:22:11 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:04.810 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.810 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.810 POWER: Env isn't set yet! 00:08:04.810 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:04.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:04.810 POWER: Cannot set governor of lcore 0 to userspace 00:08:04.810 POWER: Attempting to initialise PSTAT power management... 00:08:04.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:04.810 POWER: Cannot set governor of lcore 0 to performance 00:08:04.810 POWER: Attempting to initialise AMD PSTATE power management... 00:08:04.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:04.810 POWER: Cannot set governor of lcore 0 to userspace 00:08:04.810 POWER: Attempting to initialise CPPC power management... 00:08:04.810 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:04.810 POWER: Cannot set governor of lcore 0 to userspace 00:08:04.810 POWER: Attempting to initialise VM power management... 
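The POWER messages above, and the GUEST_CHANNEL and dpdk_governor errors that follow, are the dynamic scheduler probing each power-management backend in turn (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, then VM power management) and giving up: on this CI VM the scaling_governor sysfs node cannot be opened, so the governor stays disabled and the scheduler test continues without it. A quick host-side check of the same node the scheduler tries to open (path from the trace, with %u expanded to core 0):

# On this VM the node is unavailable, which is why every attempt above ends in
# "Cannot set governor of lcore 0 ...".
node=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [[ -r $node ]]; then
        echo "cpu0 governor: $(cat "$node")"
else
        echo "no cpufreq governor exposed for cpu0 (typical inside a VM)"
fi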
00:08:04.810 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:04.810 POWER: Unable to set Power Management Environment for lcore 0 00:08:04.810 [2024-12-06 14:22:11.444194] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:04.810 [2024-12-06 14:22:11.444219] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:04.810 [2024-12-06 14:22:11.444231] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:04.810 [2024-12-06 14:22:11.444247] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:04.810 [2024-12-06 14:22:11.444255] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:04.810 [2024-12-06 14:22:11.444263] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:04.810 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.810 14:22:11 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 [2024-12-06 14:22:11.647208] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:04.811 14:22:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.811 14:22:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 ************************************ 00:08:04.811 START TEST scheduler_create_thread 00:08:04.811 ************************************ 00:08:04.811 14:22:11 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 2 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 3 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 4 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 5 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 6 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 7 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 8 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 9 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 10 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.811 14:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.811 14:22:11 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:04.811 14:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.811 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:08:06.709 14:22:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.709 14:22:13 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:06.709 14:22:13 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:06.709 14:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.709 14:22:13 -- common/autotest_common.sh@10 -- # set +x 00:08:07.644 ************************************ 00:08:07.644 END TEST scheduler_create_thread 00:08:07.644 ************************************ 00:08:07.644 14:22:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.644 00:08:07.644 real 0m2.618s 00:08:07.644 user 0m0.022s 00:08:07.644 sys 0m0.005s 00:08:07.644 14:22:14 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.644 14:22:14 -- common/autotest_common.sh@10 -- # set +x 00:08:07.644 14:22:14 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:07.644 14:22:14 -- scheduler/scheduler.sh@46 -- # killprocess 56975 00:08:07.644 14:22:14 -- common/autotest_common.sh@936 -- # '[' -z 56975 ']' 00:08:07.644 14:22:14 -- common/autotest_common.sh@940 -- # kill -0 56975 00:08:07.644 14:22:14 -- common/autotest_common.sh@941 -- # uname 00:08:07.644 14:22:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:07.644 14:22:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56975 00:08:07.644 killing process with pid 56975 00:08:07.644 14:22:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:08:07.644 14:22:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:08:07.644 14:22:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56975' 00:08:07.644 14:22:14 -- common/autotest_common.sh@955 -- # kill 56975 00:08:07.644 14:22:14 -- common/autotest_common.sh@960 -- # wait 56975 00:08:07.900 [2024-12-06 14:22:14.758276] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:08.837 00:08:08.837 real 0m5.565s 00:08:08.837 user 0m9.892s 00:08:08.837 sys 0m0.630s 00:08:08.837 ************************************ 00:08:08.837 END TEST event_scheduler 00:08:08.837 ************************************ 00:08:08.837 14:22:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.837 14:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:08.837 14:22:15 -- event/event.sh@51 -- # modprobe -n nbd 00:08:08.837 14:22:15 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:08.837 14:22:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.837 14:22:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.837 14:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:08.837 ************************************ 00:08:08.837 START TEST app_repeat 00:08:08.837 ************************************ 00:08:08.837 14:22:15 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:08:08.837 14:22:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.837 14:22:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.837 14:22:15 -- event/event.sh@13 -- # local nbd_list 00:08:08.837 14:22:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.837 14:22:15 -- event/event.sh@14 -- # local bdev_list 00:08:08.837 14:22:15 -- event/event.sh@15 -- # local repeat_times=4 00:08:08.837 14:22:15 -- event/event.sh@17 -- # modprobe nbd 00:08:08.837 14:22:15 -- event/event.sh@19 -- # repeat_pid=57104 00:08:08.837 14:22:15 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:08.837 14:22:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.837 Process app_repeat pid: 57104 00:08:08.837 spdk_app_start Round 0 00:08:08.837 14:22:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57104' 00:08:08.837 14:22:15 -- event/event.sh@23 -- # for i in {0..2} 00:08:08.837 14:22:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:08.837 14:22:15 -- event/event.sh@25 -- # waitforlisten 57104 /var/tmp/spdk-nbd.sock 00:08:08.837 14:22:15 -- common/autotest_common.sh@829 -- # '[' -z 57104 ']' 00:08:08.837 14:22:15 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.837 14:22:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:08.837 14:22:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.837 14:22:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.837 14:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:08.837 [2024-12-06 14:22:15.765873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.837 [2024-12-06 14:22:15.766295] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57104 ] 00:08:09.095 [2024-12-06 14:22:15.901137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.353 [2024-12-06 14:22:16.188961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.353 [2024-12-06 14:22:16.188972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.285 14:22:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.285 14:22:16 -- common/autotest_common.sh@862 -- # return 0 00:08:10.285 14:22:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:10.543 Malloc0 00:08:10.543 14:22:17 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:10.802 Malloc1 00:08:11.059 14:22:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@12 -- # local i 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.059 14:22:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:11.317 /dev/nbd0 00:08:11.317 14:22:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:11.317 14:22:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:11.317 14:22:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:11.317 14:22:18 -- common/autotest_common.sh@867 -- # local i 00:08:11.317 14:22:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:11.317 14:22:18 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:11.317 14:22:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:11.317 14:22:18 -- common/autotest_common.sh@871 -- # break 00:08:11.317 14:22:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:11.317 14:22:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:11.317 14:22:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:11.317 1+0 records in 00:08:11.317 1+0 records out 00:08:11.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392362 s, 10.4 MB/s 00:08:11.317 14:22:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.317 14:22:18 -- common/autotest_common.sh@884 -- # size=4096 00:08:11.317 14:22:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.317 14:22:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:11.317 14:22:18 -- common/autotest_common.sh@887 -- # return 0 00:08:11.317 14:22:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.317 14:22:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.317 14:22:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:11.575 /dev/nbd1 00:08:11.575 14:22:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:11.575 14:22:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:11.575 14:22:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:11.575 14:22:18 -- common/autotest_common.sh@867 -- # local i 00:08:11.575 14:22:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:11.575 14:22:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:11.575 14:22:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:11.575 14:22:18 -- common/autotest_common.sh@871 -- # break 00:08:11.575 14:22:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:11.575 14:22:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:11.575 14:22:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:11.575 1+0 records in 00:08:11.575 1+0 records out 00:08:11.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458752 s, 8.9 MB/s 00:08:11.575 14:22:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.575 14:22:18 -- common/autotest_common.sh@884 -- # size=4096 00:08:11.575 14:22:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.575 14:22:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:11.575 14:22:18 -- common/autotest_common.sh@887 -- # return 0 00:08:11.575 14:22:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.575 14:22:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.575 14:22:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.575 14:22:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.575 14:22:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.832 14:22:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:11.832 { 00:08:11.832 "bdev_name": "Malloc0", 00:08:11.832 "nbd_device": "/dev/nbd0" 00:08:11.832 }, 00:08:11.832 { 00:08:11.832 "bdev_name": "Malloc1", 
00:08:11.832 "nbd_device": "/dev/nbd1" 00:08:11.832 } 00:08:11.832 ]' 00:08:11.832 14:22:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:11.832 { 00:08:11.832 "bdev_name": "Malloc0", 00:08:11.832 "nbd_device": "/dev/nbd0" 00:08:11.832 }, 00:08:11.832 { 00:08:11.832 "bdev_name": "Malloc1", 00:08:11.832 "nbd_device": "/dev/nbd1" 00:08:11.832 } 00:08:11.832 ]' 00:08:11.832 14:22:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:12.090 /dev/nbd1' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:12.090 /dev/nbd1' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@65 -- # count=2 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@95 -- # count=2 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:12.090 256+0 records in 00:08:12.090 256+0 records out 00:08:12.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653127 s, 161 MB/s 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:12.090 256+0 records in 00:08:12.090 256+0 records out 00:08:12.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271877 s, 38.6 MB/s 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:12.090 256+0 records in 00:08:12.090 256+0 records out 00:08:12.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282935 s, 37.1 MB/s 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@51 -- # local i 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.090 14:22:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@41 -- # break 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.347 14:22:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@41 -- # break 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.605 14:22:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@65 -- # true 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@104 -- # count=0 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:13.170 14:22:19 -- bdev/nbd_common.sh@109 -- # return 0 00:08:13.170 14:22:19 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:13.428 14:22:20 -- event/event.sh@35 -- # sleep 3 00:08:13.993 [2024-12-06 14:22:20.899024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.251 [2024-12-06 14:22:21.094710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.251 [2024-12-06 
14:22:21.094730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.509 [2024-12-06 14:22:21.221609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:14.509 [2024-12-06 14:22:21.221708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:16.490 14:22:23 -- event/event.sh@23 -- # for i in {0..2} 00:08:16.490 spdk_app_start Round 1 00:08:16.490 14:22:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:16.490 14:22:23 -- event/event.sh@25 -- # waitforlisten 57104 /var/tmp/spdk-nbd.sock 00:08:16.490 14:22:23 -- common/autotest_common.sh@829 -- # '[' -z 57104 ']' 00:08:16.490 14:22:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:16.490 14:22:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:16.490 14:22:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:16.490 14:22:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.490 14:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:17.055 14:22:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.055 14:22:23 -- common/autotest_common.sh@862 -- # return 0 00:08:17.055 14:22:23 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.312 Malloc0 00:08:17.312 14:22:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.569 Malloc1 00:08:17.569 14:22:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@12 -- # local i 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.569 14:22:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:18.135 /dev/nbd0 00:08:18.135 14:22:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:18.135 14:22:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:18.135 14:22:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:18.135 14:22:24 -- common/autotest_common.sh@867 -- # local i 00:08:18.135 14:22:24 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:08:18.135 14:22:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:18.135 14:22:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:18.135 14:22:24 -- common/autotest_common.sh@871 -- # break 00:08:18.135 14:22:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:18.135 14:22:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:18.135 14:22:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.135 1+0 records in 00:08:18.135 1+0 records out 00:08:18.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471987 s, 8.7 MB/s 00:08:18.135 14:22:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.135 14:22:24 -- common/autotest_common.sh@884 -- # size=4096 00:08:18.135 14:22:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.135 14:22:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:18.135 14:22:24 -- common/autotest_common.sh@887 -- # return 0 00:08:18.135 14:22:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.135 14:22:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.135 14:22:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:18.393 /dev/nbd1 00:08:18.393 14:22:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:18.393 14:22:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:18.393 14:22:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:18.393 14:22:25 -- common/autotest_common.sh@867 -- # local i 00:08:18.393 14:22:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:18.393 14:22:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:18.393 14:22:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:18.393 14:22:25 -- common/autotest_common.sh@871 -- # break 00:08:18.393 14:22:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:18.393 14:22:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:18.393 14:22:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.393 1+0 records in 00:08:18.393 1+0 records out 00:08:18.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057426 s, 7.1 MB/s 00:08:18.393 14:22:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.393 14:22:25 -- common/autotest_common.sh@884 -- # size=4096 00:08:18.393 14:22:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.393 14:22:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:18.393 14:22:25 -- common/autotest_common.sh@887 -- # return 0 00:08:18.393 14:22:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.393 14:22:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.393 14:22:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.393 14:22:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.393 14:22:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:18.664 14:22:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:18.664 { 00:08:18.664 "bdev_name": "Malloc0", 00:08:18.664 "nbd_device": "/dev/nbd0" 00:08:18.664 }, 00:08:18.664 { 00:08:18.664 
"bdev_name": "Malloc1", 00:08:18.664 "nbd_device": "/dev/nbd1" 00:08:18.664 } 00:08:18.664 ]' 00:08:18.664 14:22:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:18.664 { 00:08:18.664 "bdev_name": "Malloc0", 00:08:18.664 "nbd_device": "/dev/nbd0" 00:08:18.664 }, 00:08:18.664 { 00:08:18.664 "bdev_name": "Malloc1", 00:08:18.664 "nbd_device": "/dev/nbd1" 00:08:18.664 } 00:08:18.664 ]' 00:08:18.664 14:22:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:18.922 /dev/nbd1' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:18.922 /dev/nbd1' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@65 -- # count=2 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@95 -- # count=2 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:18.922 256+0 records in 00:08:18.922 256+0 records out 00:08:18.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685299 s, 153 MB/s 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:18.922 256+0 records in 00:08:18.922 256+0 records out 00:08:18.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276389 s, 37.9 MB/s 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:18.922 256+0 records in 00:08:18.922 256+0 records out 00:08:18.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286117 s, 36.6 MB/s 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:18.922 14:22:25 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@51 -- # local i 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:18.922 14:22:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@41 -- # break 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.187 14:22:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@41 -- # break 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.461 14:22:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.719 14:22:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:19.719 14:22:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:19.719 14:22:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@65 -- # true 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@65 -- # count=0 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@104 -- # count=0 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:19.977 14:22:26 -- bdev/nbd_common.sh@109 -- # return 0 00:08:19.977 14:22:26 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:20.544 14:22:27 -- event/event.sh@35 -- # sleep 3 00:08:21.110 [2024-12-06 14:22:27.819215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:21.110 [2024-12-06 14:22:28.023250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:08:21.110 [2024-12-06 14:22:28.023268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.368 [2024-12-06 14:22:28.096094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:21.368 [2024-12-06 14:22:28.096175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:23.268 spdk_app_start Round 2 00:08:23.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.268 14:22:30 -- event/event.sh@23 -- # for i in {0..2} 00:08:23.268 14:22:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:23.268 14:22:30 -- event/event.sh@25 -- # waitforlisten 57104 /var/tmp/spdk-nbd.sock 00:08:23.268 14:22:30 -- common/autotest_common.sh@829 -- # '[' -z 57104 ']' 00:08:23.268 14:22:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.268 14:22:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.268 14:22:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.268 14:22:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.268 14:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 14:22:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.894 14:22:30 -- common/autotest_common.sh@862 -- # return 0 00:08:23.894 14:22:30 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:23.894 Malloc0 00:08:23.894 14:22:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.152 Malloc1 00:08:24.152 14:22:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@12 -- # local i 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.152 14:22:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:24.717 /dev/nbd0 00:08:24.717 14:22:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.717 14:22:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.717 14:22:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:24.717 14:22:31 -- common/autotest_common.sh@867 -- # local i 00:08:24.717 14:22:31 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.717 14:22:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.717 14:22:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:24.717 14:22:31 -- common/autotest_common.sh@871 -- # break 00:08:24.717 14:22:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.717 14:22:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.717 14:22:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.717 1+0 records in 00:08:24.717 1+0 records out 00:08:24.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267138 s, 15.3 MB/s 00:08:24.717 14:22:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.717 14:22:31 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.717 14:22:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.717 14:22:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.717 14:22:31 -- common/autotest_common.sh@887 -- # return 0 00:08:24.717 14:22:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.717 14:22:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.717 14:22:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:24.975 /dev/nbd1 00:08:24.975 14:22:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:24.975 14:22:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:24.975 14:22:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:24.975 14:22:31 -- common/autotest_common.sh@867 -- # local i 00:08:24.975 14:22:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.975 14:22:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.975 14:22:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:24.975 14:22:31 -- common/autotest_common.sh@871 -- # break 00:08:24.975 14:22:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.975 14:22:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.975 14:22:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.975 1+0 records in 00:08:24.975 1+0 records out 00:08:24.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257694 s, 15.9 MB/s 00:08:24.975 14:22:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.975 14:22:31 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.975 14:22:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.975 14:22:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.975 14:22:31 -- common/autotest_common.sh@887 -- # return 0 00:08:24.975 14:22:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.975 14:22:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.975 14:22:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:24.975 14:22:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.975 14:22:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.274 { 00:08:25.274 "bdev_name": "Malloc0", 00:08:25.274 "nbd_device": "/dev/nbd0" 
00:08:25.274 }, 00:08:25.274 { 00:08:25.274 "bdev_name": "Malloc1", 00:08:25.274 "nbd_device": "/dev/nbd1" 00:08:25.274 } 00:08:25.274 ]' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.274 { 00:08:25.274 "bdev_name": "Malloc0", 00:08:25.274 "nbd_device": "/dev/nbd0" 00:08:25.274 }, 00:08:25.274 { 00:08:25.274 "bdev_name": "Malloc1", 00:08:25.274 "nbd_device": "/dev/nbd1" 00:08:25.274 } 00:08:25.274 ]' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.274 /dev/nbd1' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.274 /dev/nbd1' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.274 256+0 records in 00:08:25.274 256+0 records out 00:08:25.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00714015 s, 147 MB/s 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.274 256+0 records in 00:08:25.274 256+0 records out 00:08:25.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243082 s, 43.1 MB/s 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.274 256+0 records in 00:08:25.274 256+0 records out 00:08:25.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309311 s, 33.9 MB/s 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@51 -- # local i 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.274 14:22:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@41 -- # break 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.534 14:22:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@41 -- # break 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.792 14:22:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.048 14:22:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.049 14:22:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.049 14:22:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@65 -- # true 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.304 14:22:33 -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.304 14:22:33 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:26.561 14:22:33 -- event/event.sh@35 -- # sleep 3 00:08:26.818 [2024-12-06 14:22:33.659435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:26.818 [2024-12-06 14:22:33.773850] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:08:26.818 [2024-12-06 14:22:33.773861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.075 [2024-12-06 14:22:33.833180] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:27.075 [2024-12-06 14:22:33.833262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:29.600 14:22:36 -- event/event.sh@38 -- # waitforlisten 57104 /var/tmp/spdk-nbd.sock 00:08:29.600 14:22:36 -- common/autotest_common.sh@829 -- # '[' -z 57104 ']' 00:08:29.600 14:22:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:29.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:29.600 14:22:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.600 14:22:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:29.600 14:22:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.600 14:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:29.858 14:22:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.858 14:22:36 -- common/autotest_common.sh@862 -- # return 0 00:08:29.858 14:22:36 -- event/event.sh@39 -- # killprocess 57104 00:08:29.858 14:22:36 -- common/autotest_common.sh@936 -- # '[' -z 57104 ']' 00:08:29.858 14:22:36 -- common/autotest_common.sh@940 -- # kill -0 57104 00:08:29.858 14:22:36 -- common/autotest_common.sh@941 -- # uname 00:08:29.858 14:22:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:29.858 14:22:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57104 00:08:29.858 14:22:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:29.858 killing process with pid 57104 00:08:29.858 14:22:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:29.858 14:22:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57104' 00:08:29.858 14:22:36 -- common/autotest_common.sh@955 -- # kill 57104 00:08:29.858 14:22:36 -- common/autotest_common.sh@960 -- # wait 57104 00:08:30.116 spdk_app_start is called in Round 0. 00:08:30.116 Shutdown signal received, stop current app iteration 00:08:30.116 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:30.116 spdk_app_start is called in Round 1. 00:08:30.116 Shutdown signal received, stop current app iteration 00:08:30.116 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:30.116 spdk_app_start is called in Round 2. 00:08:30.116 Shutdown signal received, stop current app iteration 00:08:30.116 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:30.116 spdk_app_start is called in Round 3. 
00:08:30.116 Shutdown signal received, stop current app iteration 00:08:30.116 14:22:37 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:30.116 14:22:37 -- event/event.sh@42 -- # return 0 00:08:30.116 00:08:30.116 real 0m21.335s 00:08:30.116 user 0m47.065s 00:08:30.116 sys 0m4.097s 00:08:30.116 14:22:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.116 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:08:30.116 ************************************ 00:08:30.116 END TEST app_repeat 00:08:30.116 ************************************ 00:08:30.375 14:22:37 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:30.375 14:22:37 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:30.375 14:22:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:30.375 14:22:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.375 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:08:30.375 ************************************ 00:08:30.375 START TEST cpu_locks 00:08:30.375 ************************************ 00:08:30.375 14:22:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:30.375 * Looking for test storage... 00:08:30.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:30.375 14:22:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:30.375 14:22:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:30.375 14:22:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:30.375 14:22:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:30.375 14:22:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:30.375 14:22:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:30.375 14:22:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:30.375 14:22:37 -- scripts/common.sh@335 -- # IFS=.-: 00:08:30.375 14:22:37 -- scripts/common.sh@335 -- # read -ra ver1 00:08:30.375 14:22:37 -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.375 14:22:37 -- scripts/common.sh@336 -- # read -ra ver2 00:08:30.375 14:22:37 -- scripts/common.sh@337 -- # local 'op=<' 00:08:30.375 14:22:37 -- scripts/common.sh@339 -- # ver1_l=2 00:08:30.375 14:22:37 -- scripts/common.sh@340 -- # ver2_l=1 00:08:30.375 14:22:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:30.375 14:22:37 -- scripts/common.sh@343 -- # case "$op" in 00:08:30.375 14:22:37 -- scripts/common.sh@344 -- # : 1 00:08:30.375 14:22:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:30.375 14:22:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.375 14:22:37 -- scripts/common.sh@364 -- # decimal 1 00:08:30.375 14:22:37 -- scripts/common.sh@352 -- # local d=1 00:08:30.375 14:22:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.375 14:22:37 -- scripts/common.sh@354 -- # echo 1 00:08:30.375 14:22:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:30.375 14:22:37 -- scripts/common.sh@365 -- # decimal 2 00:08:30.375 14:22:37 -- scripts/common.sh@352 -- # local d=2 00:08:30.375 14:22:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.375 14:22:37 -- scripts/common.sh@354 -- # echo 2 00:08:30.375 14:22:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:30.375 14:22:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:30.375 14:22:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:30.375 14:22:37 -- scripts/common.sh@367 -- # return 0 00:08:30.375 14:22:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.375 14:22:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.375 --rc genhtml_branch_coverage=1 00:08:30.375 --rc genhtml_function_coverage=1 00:08:30.375 --rc genhtml_legend=1 00:08:30.375 --rc geninfo_all_blocks=1 00:08:30.375 --rc geninfo_unexecuted_blocks=1 00:08:30.375 00:08:30.375 ' 00:08:30.375 14:22:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.375 --rc genhtml_branch_coverage=1 00:08:30.375 --rc genhtml_function_coverage=1 00:08:30.375 --rc genhtml_legend=1 00:08:30.375 --rc geninfo_all_blocks=1 00:08:30.375 --rc geninfo_unexecuted_blocks=1 00:08:30.375 00:08:30.375 ' 00:08:30.375 14:22:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.375 --rc genhtml_branch_coverage=1 00:08:30.375 --rc genhtml_function_coverage=1 00:08:30.375 --rc genhtml_legend=1 00:08:30.375 --rc geninfo_all_blocks=1 00:08:30.375 --rc geninfo_unexecuted_blocks=1 00:08:30.375 00:08:30.375 ' 00:08:30.375 14:22:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.375 --rc genhtml_branch_coverage=1 00:08:30.375 --rc genhtml_function_coverage=1 00:08:30.375 --rc genhtml_legend=1 00:08:30.375 --rc geninfo_all_blocks=1 00:08:30.375 --rc geninfo_unexecuted_blocks=1 00:08:30.375 00:08:30.375 ' 00:08:30.375 14:22:37 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:30.375 14:22:37 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:30.375 14:22:37 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:30.375 14:22:37 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:30.376 14:22:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:30.376 14:22:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.376 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:08:30.376 ************************************ 00:08:30.376 START TEST default_locks 00:08:30.376 ************************************ 00:08:30.376 14:22:37 -- common/autotest_common.sh@1114 -- # default_locks 00:08:30.376 14:22:37 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57758 00:08:30.376 14:22:37 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:30.376 14:22:37 -- event/cpu_locks.sh@47 -- # waitforlisten 
57758 00:08:30.376 14:22:37 -- common/autotest_common.sh@829 -- # '[' -z 57758 ']' 00:08:30.376 14:22:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.376 14:22:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.376 14:22:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.376 14:22:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.376 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:08:30.633 [2024-12-06 14:22:37.375180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:30.633 [2024-12-06 14:22:37.375562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57758 ] 00:08:30.633 [2024-12-06 14:22:37.506838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.897 [2024-12-06 14:22:37.639221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.897 [2024-12-06 14:22:37.639397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.462 14:22:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.462 14:22:38 -- common/autotest_common.sh@862 -- # return 0 00:08:31.462 14:22:38 -- event/cpu_locks.sh@49 -- # locks_exist 57758 00:08:31.462 14:22:38 -- event/cpu_locks.sh@22 -- # lslocks -p 57758 00:08:31.462 14:22:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:32.028 14:22:38 -- event/cpu_locks.sh@50 -- # killprocess 57758 00:08:32.028 14:22:38 -- common/autotest_common.sh@936 -- # '[' -z 57758 ']' 00:08:32.028 14:22:38 -- common/autotest_common.sh@940 -- # kill -0 57758 00:08:32.028 14:22:38 -- common/autotest_common.sh@941 -- # uname 00:08:32.028 14:22:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:32.028 14:22:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57758 00:08:32.028 killing process with pid 57758 00:08:32.028 14:22:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:32.028 14:22:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:32.028 14:22:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57758' 00:08:32.028 14:22:38 -- common/autotest_common.sh@955 -- # kill 57758 00:08:32.028 14:22:38 -- common/autotest_common.sh@960 -- # wait 57758 00:08:32.595 14:22:39 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57758 00:08:32.595 14:22:39 -- common/autotest_common.sh@650 -- # local es=0 00:08:32.595 14:22:39 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57758 00:08:32.595 14:22:39 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:32.595 14:22:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.595 14:22:39 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:32.595 14:22:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.595 14:22:39 -- common/autotest_common.sh@653 -- # waitforlisten 57758 00:08:32.595 14:22:39 -- common/autotest_common.sh@829 -- # '[' -z 57758 ']' 00:08:32.595 14:22:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.595 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.595 14:22:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.595 14:22:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.596 14:22:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.596 ERROR: process (pid: 57758) is no longer running 00:08:32.596 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.596 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57758) - No such process 00:08:32.596 14:22:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.596 14:22:39 -- common/autotest_common.sh@862 -- # return 1 00:08:32.596 ************************************ 00:08:32.596 END TEST default_locks 00:08:32.596 ************************************ 00:08:32.596 14:22:39 -- common/autotest_common.sh@653 -- # es=1 00:08:32.596 14:22:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.596 14:22:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:32.596 14:22:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.596 14:22:39 -- event/cpu_locks.sh@54 -- # no_locks 00:08:32.596 14:22:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:32.596 14:22:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:32.596 14:22:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:32.596 00:08:32.596 real 0m2.155s 00:08:32.596 user 0m2.275s 00:08:32.596 sys 0m0.564s 00:08:32.596 14:22:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.596 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.596 14:22:39 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:32.596 14:22:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.596 14:22:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.596 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.596 ************************************ 00:08:32.596 START TEST default_locks_via_rpc 00:08:32.596 ************************************ 00:08:32.596 14:22:39 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:08:32.596 14:22:39 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57822 00:08:32.596 14:22:39 -- event/cpu_locks.sh@63 -- # waitforlisten 57822 00:08:32.596 14:22:39 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:32.596 14:22:39 -- common/autotest_common.sh@829 -- # '[' -z 57822 ']' 00:08:32.596 14:22:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.596 14:22:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.596 14:22:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.596 14:22:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.596 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.854 [2024-12-06 14:22:39.601588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:32.854 [2024-12-06 14:22:39.601745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57822 ] 00:08:32.854 [2024-12-06 14:22:39.745705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.112 [2024-12-06 14:22:39.904508] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.112 [2024-12-06 14:22:39.904762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.069 14:22:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.069 14:22:40 -- common/autotest_common.sh@862 -- # return 0 00:08:34.069 14:22:40 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:34.069 14:22:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.069 14:22:40 -- common/autotest_common.sh@10 -- # set +x 00:08:34.069 14:22:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.069 14:22:40 -- event/cpu_locks.sh@67 -- # no_locks 00:08:34.069 14:22:40 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:34.069 14:22:40 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:34.069 14:22:40 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:34.069 14:22:40 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:34.069 14:22:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.069 14:22:40 -- common/autotest_common.sh@10 -- # set +x 00:08:34.069 14:22:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.069 14:22:40 -- event/cpu_locks.sh@71 -- # locks_exist 57822 00:08:34.069 14:22:40 -- event/cpu_locks.sh@22 -- # lslocks -p 57822 00:08:34.069 14:22:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:34.327 14:22:41 -- event/cpu_locks.sh@73 -- # killprocess 57822 00:08:34.327 14:22:41 -- common/autotest_common.sh@936 -- # '[' -z 57822 ']' 00:08:34.327 14:22:41 -- common/autotest_common.sh@940 -- # kill -0 57822 00:08:34.327 14:22:41 -- common/autotest_common.sh@941 -- # uname 00:08:34.327 14:22:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.327 14:22:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57822 00:08:34.327 killing process with pid 57822 00:08:34.327 14:22:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.327 14:22:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.327 14:22:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57822' 00:08:34.327 14:22:41 -- common/autotest_common.sh@955 -- # kill 57822 00:08:34.327 14:22:41 -- common/autotest_common.sh@960 -- # wait 57822 00:08:34.893 ************************************ 00:08:34.893 END TEST default_locks_via_rpc 00:08:34.893 ************************************ 00:08:34.893 00:08:34.893 real 0m2.336s 00:08:34.893 user 0m2.496s 00:08:34.893 sys 0m0.720s 00:08:34.893 14:22:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.893 14:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:35.150 14:22:41 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:35.150 14:22:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.150 14:22:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.150 14:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:35.150 
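The default_locks tests above decide pass or fail by asking the kernel whether the spdk_tgt process still holds its per-core lock file; the traced commands (lslocks -p <pid> piped into grep -q spdk_cpu_lock) amount to a one-line helper. A minimal sketch, assuming that pipeline is the whole check:

# Sketch only: report whether the given pid holds any spdk_cpu_lock file lock.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

The trace calls it as locks_exist 57758 and locks_exist 57822 once each target has reported "Reactor started on core 0", and the killprocess/wait sequence that follows releases the lock before the next test starts.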
************************************ 00:08:35.150 START TEST non_locking_app_on_locked_coremask 00:08:35.150 ************************************ 00:08:35.150 14:22:41 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:08:35.150 14:22:41 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57896 00:08:35.150 14:22:41 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:35.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.150 14:22:41 -- event/cpu_locks.sh@81 -- # waitforlisten 57896 /var/tmp/spdk.sock 00:08:35.150 14:22:41 -- common/autotest_common.sh@829 -- # '[' -z 57896 ']' 00:08:35.150 14:22:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.150 14:22:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.150 14:22:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.150 14:22:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.150 14:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:35.150 [2024-12-06 14:22:41.970356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:35.150 [2024-12-06 14:22:41.971208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57896 ] 00:08:35.150 [2024-12-06 14:22:42.108030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.407 [2024-12-06 14:22:42.279431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:35.407 [2024-12-06 14:22:42.279960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:36.340 14:22:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.340 14:22:42 -- common/autotest_common.sh@862 -- # return 0 00:08:36.340 14:22:42 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:36.340 14:22:42 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57924 00:08:36.340 14:22:42 -- event/cpu_locks.sh@85 -- # waitforlisten 57924 /var/tmp/spdk2.sock 00:08:36.340 14:22:42 -- common/autotest_common.sh@829 -- # '[' -z 57924 ']' 00:08:36.340 14:22:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:36.340 14:22:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.340 14:22:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:36.340 14:22:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.340 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:08:36.340 [2024-12-06 14:22:43.035439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.340 [2024-12-06 14:22:43.035873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57924 ] 00:08:36.340 [2024-12-06 14:22:43.181576] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:36.340 [2024-12-06 14:22:43.181668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.598 [2024-12-06 14:22:43.470201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.598 [2024-12-06 14:22:43.470440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.570 14:22:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.570 14:22:44 -- common/autotest_common.sh@862 -- # return 0 00:08:37.570 14:22:44 -- event/cpu_locks.sh@87 -- # locks_exist 57896 00:08:37.570 14:22:44 -- event/cpu_locks.sh@22 -- # lslocks -p 57896 00:08:37.570 14:22:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:38.137 14:22:44 -- event/cpu_locks.sh@89 -- # killprocess 57896 00:08:38.137 14:22:44 -- common/autotest_common.sh@936 -- # '[' -z 57896 ']' 00:08:38.137 14:22:44 -- common/autotest_common.sh@940 -- # kill -0 57896 00:08:38.137 14:22:44 -- common/autotest_common.sh@941 -- # uname 00:08:38.137 14:22:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:38.137 14:22:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57896 00:08:38.137 killing process with pid 57896 00:08:38.137 14:22:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:38.137 14:22:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:38.137 14:22:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57896' 00:08:38.137 14:22:44 -- common/autotest_common.sh@955 -- # kill 57896 00:08:38.137 14:22:44 -- common/autotest_common.sh@960 -- # wait 57896 00:08:39.069 14:22:45 -- event/cpu_locks.sh@90 -- # killprocess 57924 00:08:39.069 14:22:45 -- common/autotest_common.sh@936 -- # '[' -z 57924 ']' 00:08:39.069 14:22:45 -- common/autotest_common.sh@940 -- # kill -0 57924 00:08:39.069 14:22:45 -- common/autotest_common.sh@941 -- # uname 00:08:39.069 14:22:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.069 14:22:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57924 00:08:39.069 killing process with pid 57924 00:08:39.069 14:22:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:39.069 14:22:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:39.069 14:22:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57924' 00:08:39.069 14:22:45 -- common/autotest_common.sh@955 -- # kill 57924 00:08:39.069 14:22:45 -- common/autotest_common.sh@960 -- # wait 57924 00:08:40.003 00:08:40.003 real 0m4.710s 00:08:40.003 user 0m5.136s 00:08:40.003 sys 0m1.314s 00:08:40.003 14:22:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.003 ************************************ 00:08:40.003 END TEST non_locking_app_on_locked_coremask 00:08:40.003 ************************************ 00:08:40.003 14:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.003 14:22:46 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:40.003 14:22:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.003 14:22:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.003 14:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.003 ************************************ 00:08:40.003 START TEST locking_app_on_unlocked_coremask 00:08:40.003 ************************************ 00:08:40.003 14:22:46 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:08:40.003 14:22:46 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=58009 00:08:40.003 14:22:46 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:40.003 14:22:46 -- event/cpu_locks.sh@99 -- # waitforlisten 58009 /var/tmp/spdk.sock 00:08:40.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.003 14:22:46 -- common/autotest_common.sh@829 -- # '[' -z 58009 ']' 00:08:40.003 14:22:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.003 14:22:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.003 14:22:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.003 14:22:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.003 14:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.003 [2024-12-06 14:22:46.749116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:40.003 [2024-12-06 14:22:46.749649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58009 ] 00:08:40.003 [2024-12-06 14:22:46.891627] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:40.003 [2024-12-06 14:22:46.892070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.261 [2024-12-06 14:22:47.081047] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:40.261 [2024-12-06 14:22:47.081276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:41.195 14:22:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.195 14:22:47 -- common/autotest_common.sh@862 -- # return 0 00:08:41.195 14:22:47 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58037 00:08:41.195 14:22:47 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:41.195 14:22:47 -- event/cpu_locks.sh@103 -- # waitforlisten 58037 /var/tmp/spdk2.sock 00:08:41.196 14:22:47 -- common/autotest_common.sh@829 -- # '[' -z 58037 ']' 00:08:41.196 14:22:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:41.196 14:22:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.196 14:22:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:41.196 14:22:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.196 14:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 [2024-12-06 14:22:47.887111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:41.196 [2024-12-06 14:22:47.887276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58037 ] 00:08:41.196 [2024-12-06 14:22:48.037963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.454 [2024-12-06 14:22:48.376112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.454 [2024-12-06 14:22:48.376320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.831 14:22:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.831 14:22:49 -- common/autotest_common.sh@862 -- # return 0 00:08:42.831 14:22:49 -- event/cpu_locks.sh@105 -- # locks_exist 58037 00:08:42.831 14:22:49 -- event/cpu_locks.sh@22 -- # lslocks -p 58037 00:08:42.831 14:22:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:43.397 14:22:50 -- event/cpu_locks.sh@107 -- # killprocess 58009 00:08:43.397 14:22:50 -- common/autotest_common.sh@936 -- # '[' -z 58009 ']' 00:08:43.397 14:22:50 -- common/autotest_common.sh@940 -- # kill -0 58009 00:08:43.397 14:22:50 -- common/autotest_common.sh@941 -- # uname 00:08:43.397 14:22:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:43.397 14:22:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58009 00:08:43.655 14:22:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:43.655 killing process with pid 58009 00:08:43.655 14:22:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:43.655 14:22:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58009' 00:08:43.655 14:22:50 -- common/autotest_common.sh@955 -- # kill 58009 00:08:43.655 14:22:50 -- common/autotest_common.sh@960 -- # wait 58009 00:08:45.555 14:22:52 -- event/cpu_locks.sh@108 -- # killprocess 58037 00:08:45.555 14:22:52 -- common/autotest_common.sh@936 -- # '[' -z 58037 ']' 00:08:45.555 14:22:52 -- common/autotest_common.sh@940 -- # kill -0 58037 00:08:45.555 14:22:52 -- common/autotest_common.sh@941 -- # uname 00:08:45.555 14:22:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.555 14:22:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58037 00:08:45.555 killing process with pid 58037 00:08:45.555 14:22:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.555 14:22:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.555 14:22:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58037' 00:08:45.555 14:22:52 -- common/autotest_common.sh@955 -- # kill 58037 00:08:45.555 14:22:52 -- common/autotest_common.sh@960 -- # wait 58037 00:08:46.125 ************************************ 00:08:46.125 END TEST locking_app_on_unlocked_coremask 00:08:46.125 ************************************ 00:08:46.125 00:08:46.125 real 0m6.254s 00:08:46.125 user 0m6.748s 00:08:46.125 sys 0m1.505s 00:08:46.125 14:22:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.125 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.125 14:22:52 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:46.125 14:22:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.125 14:22:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.125 14:22:52 -- common/autotest_common.sh@10 -- # set +x 
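Each locking test above ends with the same killprocess sequence: confirm the pid is still alive with kill -0, read its comm name via ps --no-headers -o comm=, log "killing process with pid ...", send SIGTERM, then wait for it so the cpu lock files are freed for the next test. A minimal sketch of that flow, reconstructed from the traced commands; the ordering of the checks is an assumption, and the real helper's special handling of a sudo-wrapped target (the '[' reactor_0 = sudo ']' check in the trace) is omitted here.

# Sketch only: terminate a test target and reap it so its cpu lock files are released.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                          # bail out if it already exited
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
        echo "killing process with pid $pid"
    fi
    kill "$pid"                                         # SIGTERM, matching spdk_kill_instance semantics
    wait "$pid"                                         # reap the child so the lock files are gone
}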
00:08:46.125 ************************************ 00:08:46.125 START TEST locking_app_on_locked_coremask 00:08:46.125 ************************************ 00:08:46.125 14:22:52 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:08:46.125 14:22:52 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58146 00:08:46.125 14:22:52 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:46.125 14:22:52 -- event/cpu_locks.sh@116 -- # waitforlisten 58146 /var/tmp/spdk.sock 00:08:46.125 14:22:52 -- common/autotest_common.sh@829 -- # '[' -z 58146 ']' 00:08:46.125 14:22:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.125 14:22:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.125 14:22:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.125 14:22:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.125 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.125 [2024-12-06 14:22:53.047629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:46.125 [2024-12-06 14:22:53.047741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58146 ] 00:08:46.387 [2024-12-06 14:22:53.184131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.645 [2024-12-06 14:22:53.355911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.645 [2024-12-06 14:22:53.356111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.234 14:22:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.234 14:22:54 -- common/autotest_common.sh@862 -- # return 0 00:08:47.235 14:22:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58174 00:08:47.235 14:22:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58174 /var/tmp/spdk2.sock 00:08:47.235 14:22:54 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:47.235 14:22:54 -- common/autotest_common.sh@650 -- # local es=0 00:08:47.235 14:22:54 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58174 /var/tmp/spdk2.sock 00:08:47.235 14:22:54 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:47.235 14:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.235 14:22:54 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:47.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:47.235 14:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.235 14:22:54 -- common/autotest_common.sh@653 -- # waitforlisten 58174 /var/tmp/spdk2.sock 00:08:47.235 14:22:54 -- common/autotest_common.sh@829 -- # '[' -z 58174 ']' 00:08:47.235 14:22:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:47.235 14:22:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.235 14:22:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:47.235 14:22:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.235 14:22:54 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 [2024-12-06 14:22:54.125504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:47.235 [2024-12-06 14:22:54.125668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58174 ] 00:08:47.492 [2024-12-06 14:22:54.278668] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58146 has claimed it. 00:08:47.492 [2024-12-06 14:22:54.278784] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:48.057 ERROR: process (pid: 58174) is no longer running 00:08:48.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58174) - No such process 00:08:48.058 14:22:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.058 14:22:54 -- common/autotest_common.sh@862 -- # return 1 00:08:48.058 14:22:54 -- common/autotest_common.sh@653 -- # es=1 00:08:48.058 14:22:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.058 14:22:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.058 14:22:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.058 14:22:54 -- event/cpu_locks.sh@122 -- # locks_exist 58146 00:08:48.058 14:22:54 -- event/cpu_locks.sh@22 -- # lslocks -p 58146 00:08:48.058 14:22:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:48.314 14:22:55 -- event/cpu_locks.sh@124 -- # killprocess 58146 00:08:48.314 14:22:55 -- common/autotest_common.sh@936 -- # '[' -z 58146 ']' 00:08:48.314 14:22:55 -- common/autotest_common.sh@940 -- # kill -0 58146 00:08:48.314 14:22:55 -- common/autotest_common.sh@941 -- # uname 00:08:48.314 14:22:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:48.314 14:22:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58146 00:08:48.314 14:22:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:48.314 killing process with pid 58146 00:08:48.314 14:22:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:48.314 14:22:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58146' 00:08:48.314 14:22:55 -- common/autotest_common.sh@955 -- # kill 58146 00:08:48.314 14:22:55 -- common/autotest_common.sh@960 -- # wait 58146 00:08:49.245 00:08:49.245 real 0m3.020s 00:08:49.245 user 0m3.294s 00:08:49.245 sys 0m0.844s 00:08:49.245 14:22:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.245 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:08:49.245 ************************************ 00:08:49.245 END TEST locking_app_on_locked_coremask 00:08:49.245 ************************************ 00:08:49.245 14:22:56 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:49.245 14:22:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.245 14:22:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.245 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:08:49.245 ************************************ 00:08:49.245 START TEST locking_overlapped_coremask 00:08:49.245 ************************************ 00:08:49.245 14:22:56 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:08:49.245 14:22:56 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58231 00:08:49.245 14:22:56 -- event/cpu_locks.sh@133 -- # waitforlisten 58231 /var/tmp/spdk.sock 00:08:49.245 14:22:56 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:49.245 14:22:56 -- common/autotest_common.sh@829 -- # '[' -z 58231 ']' 00:08:49.245 14:22:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.245 14:22:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.245 14:22:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.245 14:22:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.245 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:08:49.245 [2024-12-06 14:22:56.126293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:49.245 [2024-12-06 14:22:56.126491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58231 ] 00:08:49.502 [2024-12-06 14:22:56.273845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:49.502 [2024-12-06 14:22:56.462976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.502 [2024-12-06 14:22:56.463636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.502 [2024-12-06 14:22:56.463734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.502 [2024-12-06 14:22:56.463718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.433 14:22:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.433 14:22:57 -- common/autotest_common.sh@862 -- # return 0 00:08:50.433 14:22:57 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58261 00:08:50.433 14:22:57 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58261 /var/tmp/spdk2.sock 00:08:50.433 14:22:57 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:50.433 14:22:57 -- common/autotest_common.sh@650 -- # local es=0 00:08:50.433 14:22:57 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58261 /var/tmp/spdk2.sock 00:08:50.433 14:22:57 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:50.433 14:22:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.433 14:22:57 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:50.433 14:22:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.433 14:22:57 -- common/autotest_common.sh@653 -- # waitforlisten 58261 /var/tmp/spdk2.sock 00:08:50.433 14:22:57 -- common/autotest_common.sh@829 -- # '[' -z 58261 ']' 00:08:50.433 14:22:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:50.433 14:22:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.433 14:22:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:50.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:50.433 14:22:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.433 14:22:57 -- common/autotest_common.sh@10 -- # set +x 00:08:50.433 [2024-12-06 14:22:57.175652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:50.433 [2024-12-06 14:22:57.176062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58261 ] 00:08:50.433 [2024-12-06 14:22:57.325832] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58231 has claimed it. 00:08:50.433 [2024-12-06 14:22:57.325928] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:50.997 ERROR: process (pid: 58261) is no longer running 00:08:50.997 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58261) - No such process 00:08:50.997 14:22:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.997 14:22:57 -- common/autotest_common.sh@862 -- # return 1 00:08:50.997 14:22:57 -- common/autotest_common.sh@653 -- # es=1 00:08:50.997 14:22:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.997 14:22:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.997 14:22:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.997 14:22:57 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:50.997 14:22:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:50.997 14:22:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:50.997 14:22:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:50.997 14:22:57 -- event/cpu_locks.sh@141 -- # killprocess 58231 00:08:50.997 14:22:57 -- common/autotest_common.sh@936 -- # '[' -z 58231 ']' 00:08:50.997 14:22:57 -- common/autotest_common.sh@940 -- # kill -0 58231 00:08:50.997 14:22:57 -- common/autotest_common.sh@941 -- # uname 00:08:50.997 14:22:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.997 14:22:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58231 00:08:50.997 14:22:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:50.997 14:22:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:50.997 14:22:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58231' 00:08:50.997 killing process with pid 58231 00:08:50.997 14:22:57 -- common/autotest_common.sh@955 -- # kill 58231 00:08:50.997 14:22:57 -- common/autotest_common.sh@960 -- # wait 58231 00:08:51.942 00:08:51.942 real 0m2.722s 00:08:51.942 user 0m7.147s 00:08:51.942 sys 0m0.590s 00:08:51.942 ************************************ 00:08:51.942 END TEST locking_overlapped_coremask 00:08:51.942 ************************************ 00:08:51.942 14:22:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.942 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:08:51.942 14:22:58 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:51.942 14:22:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.942 14:22:58 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.942 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:08:51.942 ************************************ 00:08:51.942 START TEST locking_overlapped_coremask_via_rpc 00:08:51.942 ************************************ 00:08:51.942 14:22:58 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:08:51.942 14:22:58 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58318 00:08:51.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.942 14:22:58 -- event/cpu_locks.sh@149 -- # waitforlisten 58318 /var/tmp/spdk.sock 00:08:51.942 14:22:58 -- common/autotest_common.sh@829 -- # '[' -z 58318 ']' 00:08:51.942 14:22:58 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:51.942 14:22:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.942 14:22:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.942 14:22:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.942 14:22:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.942 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:08:51.942 [2024-12-06 14:22:58.887538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.942 [2024-12-06 14:22:58.887654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58318 ] 00:08:52.200 [2024-12-06 14:22:59.021582] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:52.200 [2024-12-06 14:22:59.021653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.200 [2024-12-06 14:22:59.166775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.200 [2024-12-06 14:22:59.167132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.200 [2024-12-06 14:22:59.167827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.458 [2024-12-06 14:22:59.167877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.022 14:22:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.022 14:22:59 -- common/autotest_common.sh@862 -- # return 0 00:08:53.022 14:22:59 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58348 00:08:53.022 14:22:59 -- event/cpu_locks.sh@153 -- # waitforlisten 58348 /var/tmp/spdk2.sock 00:08:53.022 14:22:59 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:53.022 14:22:59 -- common/autotest_common.sh@829 -- # '[' -z 58348 ']' 00:08:53.022 14:22:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.022 14:22:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.022 14:22:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
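Both targets in the via_rpc variant are started with --disable-cpumask-locks, so the overlapping masks are accepted at startup and the per-core locks are only taken later through RPC. A rough by-hand equivalent of the two launches recorded above, with paths, masks and sockets copied from the log (the backgrounding and the fixed sleep are illustrative assumptions; the test script itself waits on each RPC socket with waitforlisten):

    # Primary target on cores 0-2, default RPC socket, core locks deferred.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # Secondary target on cores 2-4, its own RPC socket, core locks deferred.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    sleep 1    # stand-in for waitforlisten on /var/tmp/spdk.sock and /var/tmp/spdk2.sock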
00:08:53.022 14:22:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.022 14:22:59 -- common/autotest_common.sh@10 -- # set +x 00:08:53.022 [2024-12-06 14:22:59.959469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.022 [2024-12-06 14:22:59.960628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58348 ] 00:08:53.279 [2024-12-06 14:23:00.117014] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:53.279 [2024-12-06 14:23:00.117093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.536 [2024-12-06 14:23:00.380718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.536 [2024-12-06 14:23:00.381021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.536 [2024-12-06 14:23:00.381166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:53.536 [2024-12-06 14:23:00.381305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.102 14:23:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.102 14:23:00 -- common/autotest_common.sh@862 -- # return 0 00:08:54.102 14:23:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:54.102 14:23:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.102 14:23:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 14:23:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.102 14:23:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:54.102 14:23:00 -- common/autotest_common.sh@650 -- # local es=0 00:08:54.102 14:23:00 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:54.102 14:23:00 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:54.102 14:23:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.102 14:23:00 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:54.102 14:23:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.102 14:23:00 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:54.102 14:23:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.102 14:23:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 [2024-12-06 14:23:00.941648] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58318 has claimed it. 00:08:54.102 2024/12/06 14:23:00 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:08:54.102 request: 00:08:54.102 { 00:08:54.102 "method": "framework_enable_cpumask_locks", 00:08:54.102 "params": {} 00:08:54.102 } 00:08:54.102 Got JSON-RPC error response 00:08:54.102 GoRPCClient: error on JSON-RPC call 00:08:54.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
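The failed call above is the point of this test: enabling the locks on the primary target (pid 58318) claims cores 0-2, so enabling them afterwards on the secondary target over /var/tmp/spdk2.sock has to be rejected with the -32603 error shown. The rpc_cmd helper wraps SPDK's scripts/rpc.py, so the two calls amount to roughly the following (the rpc.py path and subcommand spelling are assumptions inferred from how rpc_cmd is used in this log):

    # Succeeds: the primary target on the default /var/tmp/spdk.sock socket claims cores 0-2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # Fails with "Failed to claim CPU core: 2": core 2 is already locked by pid 58318.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks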
00:08:54.102 14:23:00 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:54.103 14:23:00 -- common/autotest_common.sh@653 -- # es=1 00:08:54.103 14:23:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.103 14:23:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:54.103 14:23:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.103 14:23:00 -- event/cpu_locks.sh@158 -- # waitforlisten 58318 /var/tmp/spdk.sock 00:08:54.103 14:23:00 -- common/autotest_common.sh@829 -- # '[' -z 58318 ']' 00:08:54.103 14:23:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.103 14:23:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.103 14:23:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.103 14:23:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.103 14:23:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:54.360 14:23:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.360 14:23:01 -- common/autotest_common.sh@862 -- # return 0 00:08:54.360 14:23:01 -- event/cpu_locks.sh@159 -- # waitforlisten 58348 /var/tmp/spdk2.sock 00:08:54.360 14:23:01 -- common/autotest_common.sh@829 -- # '[' -z 58348 ']' 00:08:54.360 14:23:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:54.361 14:23:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.361 14:23:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:54.361 14:23:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.361 14:23:01 -- common/autotest_common.sh@10 -- # set +x 00:08:54.619 ************************************ 00:08:54.619 END TEST locking_overlapped_coremask_via_rpc 00:08:54.619 ************************************ 00:08:54.619 14:23:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.619 14:23:01 -- common/autotest_common.sh@862 -- # return 0 00:08:54.619 14:23:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:54.619 14:23:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:54.619 14:23:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:54.619 14:23:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:54.619 00:08:54.619 real 0m2.688s 00:08:54.619 user 0m1.400s 00:08:54.619 sys 0m0.229s 00:08:54.619 14:23:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.619 14:23:01 -- common/autotest_common.sh@10 -- # set +x 00:08:54.619 14:23:01 -- event/cpu_locks.sh@174 -- # cleanup 00:08:54.619 14:23:01 -- event/cpu_locks.sh@15 -- # [[ -z 58318 ]] 00:08:54.619 14:23:01 -- event/cpu_locks.sh@15 -- # killprocess 58318 00:08:54.619 14:23:01 -- common/autotest_common.sh@936 -- # '[' -z 58318 ']' 00:08:54.619 14:23:01 -- common/autotest_common.sh@940 -- # kill -0 58318 00:08:54.619 14:23:01 -- common/autotest_common.sh@941 -- # uname 00:08:54.619 14:23:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:54.619 14:23:01 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 58318 00:08:54.619 killing process with pid 58318 00:08:54.619 14:23:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:54.619 14:23:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:54.619 14:23:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58318' 00:08:54.619 14:23:01 -- common/autotest_common.sh@955 -- # kill 58318 00:08:54.619 14:23:01 -- common/autotest_common.sh@960 -- # wait 58318 00:08:55.553 14:23:02 -- event/cpu_locks.sh@16 -- # [[ -z 58348 ]] 00:08:55.553 14:23:02 -- event/cpu_locks.sh@16 -- # killprocess 58348 00:08:55.553 14:23:02 -- common/autotest_common.sh@936 -- # '[' -z 58348 ']' 00:08:55.553 14:23:02 -- common/autotest_common.sh@940 -- # kill -0 58348 00:08:55.553 14:23:02 -- common/autotest_common.sh@941 -- # uname 00:08:55.553 14:23:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:55.553 14:23:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58348 00:08:55.553 killing process with pid 58348 00:08:55.553 14:23:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:08:55.553 14:23:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:08:55.553 14:23:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58348' 00:08:55.553 14:23:02 -- common/autotest_common.sh@955 -- # kill 58348 00:08:55.553 14:23:02 -- common/autotest_common.sh@960 -- # wait 58348 00:08:55.812 14:23:02 -- event/cpu_locks.sh@18 -- # rm -f 00:08:55.812 Process with pid 58318 is not found 00:08:55.812 Process with pid 58348 is not found 00:08:55.812 14:23:02 -- event/cpu_locks.sh@1 -- # cleanup 00:08:55.812 14:23:02 -- event/cpu_locks.sh@15 -- # [[ -z 58318 ]] 00:08:55.812 14:23:02 -- event/cpu_locks.sh@15 -- # killprocess 58318 00:08:55.812 14:23:02 -- common/autotest_common.sh@936 -- # '[' -z 58318 ']' 00:08:55.812 14:23:02 -- common/autotest_common.sh@940 -- # kill -0 58318 00:08:55.812 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58318) - No such process 00:08:55.812 14:23:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58318 is not found' 00:08:55.812 14:23:02 -- event/cpu_locks.sh@16 -- # [[ -z 58348 ]] 00:08:55.812 14:23:02 -- event/cpu_locks.sh@16 -- # killprocess 58348 00:08:55.812 14:23:02 -- common/autotest_common.sh@936 -- # '[' -z 58348 ']' 00:08:55.812 14:23:02 -- common/autotest_common.sh@940 -- # kill -0 58348 00:08:55.812 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58348) - No such process 00:08:55.812 14:23:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58348 is not found' 00:08:55.812 14:23:02 -- event/cpu_locks.sh@18 -- # rm -f 00:08:55.812 00:08:55.812 real 0m25.641s 00:08:55.812 user 0m42.380s 00:08:55.812 sys 0m6.823s 00:08:55.812 14:23:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.812 14:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:55.812 ************************************ 00:08:55.812 END TEST cpu_locks 00:08:55.813 ************************************ 00:08:56.071 ************************************ 00:08:56.071 END TEST event 00:08:56.071 ************************************ 00:08:56.071 00:08:56.071 real 0m58.507s 00:08:56.071 user 1m47.144s 00:08:56.071 sys 0m12.301s 00:08:56.071 14:23:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.071 14:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:56.071 14:23:02 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:56.071 14:23:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.071 14:23:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.071 14:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:56.071 ************************************ 00:08:56.071 START TEST thread 00:08:56.071 ************************************ 00:08:56.071 14:23:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:56.071 * Looking for test storage... 00:08:56.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:56.071 14:23:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:56.071 14:23:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:56.071 14:23:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:56.071 14:23:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:56.071 14:23:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:56.071 14:23:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:56.071 14:23:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:56.071 14:23:03 -- scripts/common.sh@335 -- # IFS=.-: 00:08:56.071 14:23:03 -- scripts/common.sh@335 -- # read -ra ver1 00:08:56.071 14:23:03 -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.071 14:23:03 -- scripts/common.sh@336 -- # read -ra ver2 00:08:56.071 14:23:03 -- scripts/common.sh@337 -- # local 'op=<' 00:08:56.071 14:23:03 -- scripts/common.sh@339 -- # ver1_l=2 00:08:56.071 14:23:03 -- scripts/common.sh@340 -- # ver2_l=1 00:08:56.071 14:23:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:56.071 14:23:03 -- scripts/common.sh@343 -- # case "$op" in 00:08:56.071 14:23:03 -- scripts/common.sh@344 -- # : 1 00:08:56.071 14:23:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:56.071 14:23:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.071 14:23:03 -- scripts/common.sh@364 -- # decimal 1 00:08:56.071 14:23:03 -- scripts/common.sh@352 -- # local d=1 00:08:56.071 14:23:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.071 14:23:03 -- scripts/common.sh@354 -- # echo 1 00:08:56.071 14:23:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:56.330 14:23:03 -- scripts/common.sh@365 -- # decimal 2 00:08:56.330 14:23:03 -- scripts/common.sh@352 -- # local d=2 00:08:56.330 14:23:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.330 14:23:03 -- scripts/common.sh@354 -- # echo 2 00:08:56.330 14:23:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:56.330 14:23:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:56.330 14:23:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:56.330 14:23:03 -- scripts/common.sh@367 -- # return 0 00:08:56.330 14:23:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.330 14:23:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:56.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.330 --rc genhtml_branch_coverage=1 00:08:56.330 --rc genhtml_function_coverage=1 00:08:56.330 --rc genhtml_legend=1 00:08:56.330 --rc geninfo_all_blocks=1 00:08:56.330 --rc geninfo_unexecuted_blocks=1 00:08:56.330 00:08:56.330 ' 00:08:56.330 14:23:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:56.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.330 --rc genhtml_branch_coverage=1 00:08:56.330 --rc genhtml_function_coverage=1 00:08:56.330 --rc genhtml_legend=1 00:08:56.330 --rc geninfo_all_blocks=1 00:08:56.330 --rc geninfo_unexecuted_blocks=1 00:08:56.330 00:08:56.330 ' 00:08:56.330 14:23:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:56.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.330 --rc genhtml_branch_coverage=1 00:08:56.330 --rc genhtml_function_coverage=1 00:08:56.330 --rc genhtml_legend=1 00:08:56.330 --rc geninfo_all_blocks=1 00:08:56.330 --rc geninfo_unexecuted_blocks=1 00:08:56.330 00:08:56.330 ' 00:08:56.330 14:23:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:56.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.330 --rc genhtml_branch_coverage=1 00:08:56.330 --rc genhtml_function_coverage=1 00:08:56.330 --rc genhtml_legend=1 00:08:56.330 --rc geninfo_all_blocks=1 00:08:56.330 --rc geninfo_unexecuted_blocks=1 00:08:56.330 00:08:56.330 ' 00:08:56.330 14:23:03 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:56.330 14:23:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:56.330 14:23:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.330 14:23:03 -- common/autotest_common.sh@10 -- # set +x 00:08:56.330 ************************************ 00:08:56.330 START TEST thread_poller_perf 00:08:56.330 ************************************ 00:08:56.330 14:23:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:56.330 [2024-12-06 14:23:03.075449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:56.330 [2024-12-06 14:23:03.075698] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58507 ] 00:08:56.330 [2024-12-06 14:23:03.215912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.588 [2024-12-06 14:23:03.376034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.588 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:57.981 [2024-12-06T14:23:04.951Z] ====================================== 00:08:57.981 [2024-12-06T14:23:04.951Z] busy:2209837107 (cyc) 00:08:57.981 [2024-12-06T14:23:04.951Z] total_run_count: 300000 00:08:57.981 [2024-12-06T14:23:04.951Z] tsc_hz: 2200000000 (cyc) 00:08:57.981 [2024-12-06T14:23:04.951Z] ====================================== 00:08:57.981 [2024-12-06T14:23:04.951Z] poller_cost: 7366 (cyc), 3348 (nsec) 00:08:57.981 ************************************ 00:08:57.981 END TEST thread_poller_perf 00:08:57.982 ************************************ 00:08:57.982 00:08:57.982 real 0m1.487s 00:08:57.982 user 0m1.309s 00:08:57.982 sys 0m0.067s 00:08:57.982 14:23:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.982 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:08:57.982 14:23:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:57.982 14:23:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:57.982 14:23:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.982 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:08:57.982 ************************************ 00:08:57.982 START TEST thread_poller_perf 00:08:57.982 ************************************ 00:08:57.982 14:23:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:57.982 [2024-12-06 14:23:04.615969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:57.982 [2024-12-06 14:23:04.616069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58543 ] 00:08:57.982 [2024-12-06 14:23:04.755379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.982 [2024-12-06 14:23:04.908402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.982 Running 1000 pollers for 1 seconds with 0 microseconds period. 
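The poller_cost line in the first summary above is derived from the other three figures: busy cycles divided by the run count gives cycles per poll, and dividing that by the TSC rate in cycles per nanosecond gives the nanosecond value. A quick sanity check of the reported numbers (the exact rounding used inside poller_perf is an assumption):

    # 2209837107 busy cycles over 300000 polls at a 2200000000 Hz TSC.
    awk 'BEGIN {
      busy = 2209837107; runs = 300000; tsc_hz = 2200000000
      cyc = busy / runs                                          # ~7366 cycles per poll
      printf "%.0f cyc, %.0f nsec\n", cyc, cyc / (tsc_hz / 1e9)  # ~3348 nsec
    }'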
00:08:59.384 [2024-12-06T14:23:06.354Z] ====================================== 00:08:59.384 [2024-12-06T14:23:06.354Z] busy:2203190993 (cyc) 00:08:59.384 [2024-12-06T14:23:06.354Z] total_run_count: 4138000 00:08:59.384 [2024-12-06T14:23:06.354Z] tsc_hz: 2200000000 (cyc) 00:08:59.384 [2024-12-06T14:23:06.354Z] ====================================== 00:08:59.384 [2024-12-06T14:23:06.354Z] poller_cost: 532 (cyc), 241 (nsec) 00:08:59.384 00:08:59.384 real 0m1.468s 00:08:59.384 user 0m1.279s 00:08:59.384 sys 0m0.080s 00:08:59.384 ************************************ 00:08:59.384 END TEST thread_poller_perf 00:08:59.384 ************************************ 00:08:59.384 14:23:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.384 14:23:06 -- common/autotest_common.sh@10 -- # set +x 00:08:59.384 14:23:06 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:59.384 00:08:59.384 real 0m3.250s 00:08:59.384 user 0m2.741s 00:08:59.384 sys 0m0.287s 00:08:59.384 ************************************ 00:08:59.384 END TEST thread 00:08:59.384 ************************************ 00:08:59.384 14:23:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.384 14:23:06 -- common/autotest_common.sh@10 -- # set +x 00:08:59.384 14:23:06 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:59.384 14:23:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.384 14:23:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.384 14:23:06 -- common/autotest_common.sh@10 -- # set +x 00:08:59.384 ************************************ 00:08:59.384 START TEST accel 00:08:59.384 ************************************ 00:08:59.384 14:23:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:59.384 * Looking for test storage... 00:08:59.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:59.384 14:23:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:59.384 14:23:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:59.384 14:23:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:59.384 14:23:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:59.384 14:23:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:59.384 14:23:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:59.384 14:23:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:59.384 14:23:06 -- scripts/common.sh@335 -- # IFS=.-: 00:08:59.384 14:23:06 -- scripts/common.sh@335 -- # read -ra ver1 00:08:59.384 14:23:06 -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.384 14:23:06 -- scripts/common.sh@336 -- # read -ra ver2 00:08:59.384 14:23:06 -- scripts/common.sh@337 -- # local 'op=<' 00:08:59.384 14:23:06 -- scripts/common.sh@339 -- # ver1_l=2 00:08:59.384 14:23:06 -- scripts/common.sh@340 -- # ver2_l=1 00:08:59.384 14:23:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:59.384 14:23:06 -- scripts/common.sh@343 -- # case "$op" in 00:08:59.384 14:23:06 -- scripts/common.sh@344 -- # : 1 00:08:59.384 14:23:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:59.384 14:23:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.384 14:23:06 -- scripts/common.sh@364 -- # decimal 1 00:08:59.384 14:23:06 -- scripts/common.sh@352 -- # local d=1 00:08:59.384 14:23:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.384 14:23:06 -- scripts/common.sh@354 -- # echo 1 00:08:59.384 14:23:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:59.384 14:23:06 -- scripts/common.sh@365 -- # decimal 2 00:08:59.384 14:23:06 -- scripts/common.sh@352 -- # local d=2 00:08:59.384 14:23:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.384 14:23:06 -- scripts/common.sh@354 -- # echo 2 00:08:59.384 14:23:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:59.384 14:23:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:59.384 14:23:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:59.384 14:23:06 -- scripts/common.sh@367 -- # return 0 00:08:59.384 14:23:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.384 14:23:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:59.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.384 --rc genhtml_branch_coverage=1 00:08:59.384 --rc genhtml_function_coverage=1 00:08:59.384 --rc genhtml_legend=1 00:08:59.384 --rc geninfo_all_blocks=1 00:08:59.384 --rc geninfo_unexecuted_blocks=1 00:08:59.384 00:08:59.384 ' 00:08:59.384 14:23:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:59.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.384 --rc genhtml_branch_coverage=1 00:08:59.384 --rc genhtml_function_coverage=1 00:08:59.384 --rc genhtml_legend=1 00:08:59.384 --rc geninfo_all_blocks=1 00:08:59.384 --rc geninfo_unexecuted_blocks=1 00:08:59.384 00:08:59.384 ' 00:08:59.384 14:23:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:59.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.384 --rc genhtml_branch_coverage=1 00:08:59.384 --rc genhtml_function_coverage=1 00:08:59.384 --rc genhtml_legend=1 00:08:59.384 --rc geninfo_all_blocks=1 00:08:59.384 --rc geninfo_unexecuted_blocks=1 00:08:59.384 00:08:59.384 ' 00:08:59.385 14:23:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.385 --rc genhtml_branch_coverage=1 00:08:59.385 --rc genhtml_function_coverage=1 00:08:59.385 --rc genhtml_legend=1 00:08:59.385 --rc geninfo_all_blocks=1 00:08:59.385 --rc geninfo_unexecuted_blocks=1 00:08:59.385 00:08:59.385 ' 00:08:59.385 14:23:06 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:08:59.385 14:23:06 -- accel/accel.sh@74 -- # get_expected_opcs 00:08:59.385 14:23:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:59.385 14:23:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=58624 00:08:59.385 14:23:06 -- accel/accel.sh@60 -- # waitforlisten 58624 00:08:59.385 14:23:06 -- common/autotest_common.sh@829 -- # '[' -z 58624 ']' 00:08:59.385 14:23:06 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:59.385 14:23:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.385 14:23:06 -- accel/accel.sh@58 -- # build_accel_config 00:08:59.385 14:23:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.385 14:23:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:59.385 14:23:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.385 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:08:59.385 14:23:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.385 14:23:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.385 14:23:06 -- common/autotest_common.sh@10 -- # set +x 00:08:59.385 14:23:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.385 14:23:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:59.385 14:23:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:59.385 14:23:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:59.385 14:23:06 -- accel/accel.sh@42 -- # jq -r . 00:08:59.643 [2024-12-06 14:23:06.388739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:59.643 [2024-12-06 14:23:06.389047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58624 ] 00:08:59.643 [2024-12-06 14:23:06.524534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.901 [2024-12-06 14:23:06.682159] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.901 [2024-12-06 14:23:06.682647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.468 14:23:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.468 14:23:07 -- common/autotest_common.sh@862 -- # return 0 00:09:00.468 14:23:07 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:00.468 14:23:07 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:00.468 14:23:07 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:00.468 14:23:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.468 14:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:00.726 14:23:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.726 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.726 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.726 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.726 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.726 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.726 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.726 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.726 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.726 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.726 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.726 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 
00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.726 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.726 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # IFS== 00:09:00.727 14:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:00.727 14:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:00.727 14:23:07 -- accel/accel.sh@67 -- # killprocess 58624 00:09:00.727 14:23:07 -- common/autotest_common.sh@936 -- # '[' -z 58624 ']' 00:09:00.727 14:23:07 -- common/autotest_common.sh@940 -- # kill -0 58624 00:09:00.727 14:23:07 -- common/autotest_common.sh@941 -- # uname 00:09:00.727 14:23:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.727 14:23:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58624 00:09:00.727 14:23:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.727 14:23:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.727 14:23:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58624' 00:09:00.727 killing process with pid 58624 00:09:00.727 14:23:07 -- common/autotest_common.sh@955 -- # kill 58624 00:09:00.727 14:23:07 -- common/autotest_common.sh@960 -- # wait 58624 00:09:01.292 14:23:07 -- accel/accel.sh@68 -- # trap - ERR 00:09:01.292 14:23:07 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:01.292 14:23:07 -- common/autotest_common.sh@1087 -- # '[' 
3 -le 1 ']' 00:09:01.292 14:23:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.292 14:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.292 14:23:07 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:09:01.292 14:23:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:01.292 14:23:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:01.292 14:23:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:01.292 14:23:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.292 14:23:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.292 14:23:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:01.292 14:23:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:01.292 14:23:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:01.292 14:23:07 -- accel/accel.sh@42 -- # jq -r . 00:09:01.292 14:23:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.292 14:23:08 -- common/autotest_common.sh@10 -- # set +x 00:09:01.292 14:23:08 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:01.292 14:23:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:01.292 14:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.292 14:23:08 -- common/autotest_common.sh@10 -- # set +x 00:09:01.292 ************************************ 00:09:01.292 START TEST accel_missing_filename 00:09:01.292 ************************************ 00:09:01.292 14:23:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:09:01.292 14:23:08 -- common/autotest_common.sh@650 -- # local es=0 00:09:01.292 14:23:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:01.292 14:23:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:01.292 14:23:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.292 14:23:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:01.292 14:23:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.292 14:23:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:09:01.292 14:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:01.292 14:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:01.292 14:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:01.292 14:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.292 14:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.292 14:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:01.292 14:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:01.292 14:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:01.292 14:23:08 -- accel/accel.sh@42 -- # jq -r . 00:09:01.292 [2024-12-06 14:23:08.102869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:01.292 [2024-12-06 14:23:08.102976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58694 ] 00:09:01.292 [2024-12-06 14:23:08.241185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.550 [2024-12-06 14:23:08.354548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.550 [2024-12-06 14:23:08.411315] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.550 [2024-12-06 14:23:08.488782] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:01.808 A filename is required. 00:09:01.808 14:23:08 -- common/autotest_common.sh@653 -- # es=234 00:09:01.808 14:23:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.808 14:23:08 -- common/autotest_common.sh@662 -- # es=106 00:09:01.808 ************************************ 00:09:01.808 END TEST accel_missing_filename 00:09:01.808 ************************************ 00:09:01.808 14:23:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:01.808 14:23:08 -- common/autotest_common.sh@670 -- # es=1 00:09:01.808 14:23:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.808 00:09:01.808 real 0m0.518s 00:09:01.808 user 0m0.352s 00:09:01.808 sys 0m0.111s 00:09:01.808 14:23:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.808 14:23:08 -- common/autotest_common.sh@10 -- # set +x 00:09:01.808 14:23:08 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:01.808 14:23:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:01.808 14:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.808 14:23:08 -- common/autotest_common.sh@10 -- # set +x 00:09:01.808 ************************************ 00:09:01.808 START TEST accel_compress_verify 00:09:01.808 ************************************ 00:09:01.808 14:23:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:01.808 14:23:08 -- common/autotest_common.sh@650 -- # local es=0 00:09:01.808 14:23:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:01.808 14:23:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:01.808 14:23:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.808 14:23:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:01.808 14:23:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.809 14:23:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:01.809 14:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:01.809 14:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:01.809 14:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:01.809 14:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.809 14:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.809 14:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:01.809 14:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:01.809 14:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:01.809 
14:23:08 -- accel/accel.sh@42 -- # jq -r . 00:09:01.809 [2024-12-06 14:23:08.678001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:01.809 [2024-12-06 14:23:08.678128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58724 ] 00:09:02.067 [2024-12-06 14:23:08.820870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.067 [2024-12-06 14:23:08.977209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.325 [2024-12-06 14:23:09.057450] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.325 [2024-12-06 14:23:09.175138] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:02.583 00:09:02.583 Compression does not support the verify option, aborting. 00:09:02.583 14:23:09 -- common/autotest_common.sh@653 -- # es=161 00:09:02.583 14:23:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.583 14:23:09 -- common/autotest_common.sh@662 -- # es=33 00:09:02.583 14:23:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:02.583 14:23:09 -- common/autotest_common.sh@670 -- # es=1 00:09:02.583 14:23:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.583 00:09:02.583 real 0m0.682s 00:09:02.583 user 0m0.478s 00:09:02.583 sys 0m0.152s 00:09:02.583 14:23:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.583 ************************************ 00:09:02.583 END TEST accel_compress_verify 00:09:02.583 ************************************ 00:09:02.583 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.583 14:23:09 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:02.583 14:23:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:02.583 14:23:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.583 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.583 ************************************ 00:09:02.583 START TEST accel_wrong_workload 00:09:02.583 ************************************ 00:09:02.583 14:23:09 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:09:02.583 14:23:09 -- common/autotest_common.sh@650 -- # local es=0 00:09:02.583 14:23:09 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:02.583 14:23:09 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:02.583 14:23:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.583 14:23:09 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:02.583 14:23:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.583 14:23:09 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:09:02.583 14:23:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:02.583 14:23:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.583 14:23:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.583 14:23:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.583 14:23:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.583 14:23:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.583 14:23:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.583 14:23:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.583 14:23:09 -- accel/accel.sh@42 -- # jq -r . 
00:09:02.583 Unsupported workload type: foobar 00:09:02.583 [2024-12-06 14:23:09.415951] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:02.583 accel_perf options: 00:09:02.583 [-h help message] 00:09:02.583 [-q queue depth per core] 00:09:02.583 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:02.583 [-T number of threads per core 00:09:02.583 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:02.583 [-t time in seconds] 00:09:02.583 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:02.583 [ dif_verify, , dif_generate, dif_generate_copy 00:09:02.583 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:02.583 [-l for compress/decompress workloads, name of uncompressed input file 00:09:02.583 [-S for crc32c workload, use this seed value (default 0) 00:09:02.584 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:02.584 [-f for fill workload, use this BYTE value (default 255) 00:09:02.584 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:02.584 [-y verify result if this switch is on] 00:09:02.584 [-a tasks to allocate per core (default: same value as -q)] 00:09:02.584 Can be used to spread operations across a wider range of memory. 00:09:02.584 14:23:09 -- common/autotest_common.sh@653 -- # es=1 00:09:02.584 14:23:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.584 14:23:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:02.584 14:23:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.584 00:09:02.584 real 0m0.035s 00:09:02.584 user 0m0.020s 00:09:02.584 sys 0m0.014s 00:09:02.584 14:23:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.584 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.584 ************************************ 00:09:02.584 END TEST accel_wrong_workload 00:09:02.584 ************************************ 00:09:02.584 14:23:09 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:02.584 14:23:09 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:02.584 14:23:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.584 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.584 ************************************ 00:09:02.584 START TEST accel_negative_buffers 00:09:02.584 ************************************ 00:09:02.584 14:23:09 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:02.584 14:23:09 -- common/autotest_common.sh@650 -- # local es=0 00:09:02.584 14:23:09 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:02.584 14:23:09 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:02.584 14:23:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.584 14:23:09 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:02.584 14:23:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.584 14:23:09 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:09:02.584 14:23:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:02.584 14:23:09 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:02.584 14:23:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.584 14:23:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.584 14:23:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.584 14:23:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.584 14:23:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.584 14:23:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.584 14:23:09 -- accel/accel.sh@42 -- # jq -r . 00:09:02.584 -x option must be non-negative. 00:09:02.584 [2024-12-06 14:23:09.505583] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:02.584 accel_perf options: 00:09:02.584 [-h help message] 00:09:02.584 [-q queue depth per core] 00:09:02.584 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:02.584 [-T number of threads per core 00:09:02.584 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:02.584 [-t time in seconds] 00:09:02.584 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:02.584 [ dif_verify, , dif_generate, dif_generate_copy 00:09:02.584 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:02.584 [-l for compress/decompress workloads, name of uncompressed input file 00:09:02.584 [-S for crc32c workload, use this seed value (default 0) 00:09:02.584 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:02.584 [-f for fill workload, use this BYTE value (default 255) 00:09:02.584 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:02.584 [-y verify result if this switch is on] 00:09:02.584 [-a tasks to allocate per core (default: same value as -q)] 00:09:02.584 Can be used to spread operations across a wider range of memory. 
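The option listing above is printed because accel_perf rejects the negative -x value; the same binary is then driven with valid arguments for the crc32c case that follows. A by-hand run matching the flags used below, with the -c /dev/fd/62 configuration redirection that the test harness supplies left out (per the help text above, the transfer size defaults to 4 KiB):

    # One-second software crc32c run: seed 32, single-buffer vectors, verify results.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y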
00:09:02.584 14:23:09 -- common/autotest_common.sh@653 -- # es=1 00:09:02.584 14:23:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.584 14:23:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:02.584 14:23:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.584 00:09:02.584 real 0m0.036s 00:09:02.584 user 0m0.020s 00:09:02.584 sys 0m0.015s 00:09:02.584 ************************************ 00:09:02.584 END TEST accel_negative_buffers 00:09:02.584 ************************************ 00:09:02.584 14:23:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.584 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.842 14:23:09 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:02.842 14:23:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:02.842 14:23:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.842 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.842 ************************************ 00:09:02.842 START TEST accel_crc32c 00:09:02.842 ************************************ 00:09:02.842 14:23:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:02.842 14:23:09 -- accel/accel.sh@16 -- # local accel_opc 00:09:02.842 14:23:09 -- accel/accel.sh@17 -- # local accel_module 00:09:02.842 14:23:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:02.842 14:23:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:02.842 14:23:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.842 14:23:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.842 14:23:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.842 14:23:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.842 14:23:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.842 14:23:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.842 14:23:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.842 14:23:09 -- accel/accel.sh@42 -- # jq -r . 00:09:02.842 [2024-12-06 14:23:09.588569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:02.842 [2024-12-06 14:23:09.588881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58782 ] 00:09:02.842 [2024-12-06 14:23:09.729333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.100 [2024-12-06 14:23:09.885884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.500 14:23:11 -- accel/accel.sh@18 -- # out=' 00:09:04.500 SPDK Configuration: 00:09:04.500 Core mask: 0x1 00:09:04.500 00:09:04.500 Accel Perf Configuration: 00:09:04.500 Workload Type: crc32c 00:09:04.500 CRC-32C seed: 32 00:09:04.500 Transfer size: 4096 bytes 00:09:04.500 Vector count 1 00:09:04.500 Module: software 00:09:04.500 Queue depth: 32 00:09:04.500 Allocate depth: 32 00:09:04.500 # threads/core: 1 00:09:04.500 Run time: 1 seconds 00:09:04.500 Verify: Yes 00:09:04.500 00:09:04.500 Running for 1 seconds... 
00:09:04.500 00:09:04.500 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:04.500 ------------------------------------------------------------------------------------ 00:09:04.500 0,0 445568/s 1740 MiB/s 0 0 00:09:04.500 ==================================================================================== 00:09:04.500 Total 445568/s 1740 MiB/s 0 0' 00:09:04.500 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:04.500 14:23:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:04.500 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:04.500 14:23:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:04.500 14:23:11 -- accel/accel.sh@12 -- # build_accel_config 00:09:04.500 14:23:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:04.500 14:23:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.500 14:23:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.500 14:23:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:04.500 14:23:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:04.500 14:23:11 -- accel/accel.sh@41 -- # local IFS=, 00:09:04.500 14:23:11 -- accel/accel.sh@42 -- # jq -r . 00:09:04.500 [2024-12-06 14:23:11.325870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.500 [2024-12-06 14:23:11.326003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58806 ] 00:09:04.758 [2024-12-06 14:23:11.469495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.758 [2024-12-06 14:23:11.647231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val= 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val= 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=0x1 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val= 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val= 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=crc32c 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=32 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val= 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=software 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@23 -- # accel_module=software 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=32 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=32 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=1 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val=Yes 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val= 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:05.037 14:23:11 -- accel/accel.sh@21 -- # val= 00:09:05.037 14:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # IFS=: 00:09:05.037 14:23:11 -- accel/accel.sh@20 -- # read -r var val 00:09:06.413 14:23:13 -- accel/accel.sh@21 -- # val= 00:09:06.413 14:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # IFS=: 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # read -r var val 00:09:06.413 14:23:13 -- accel/accel.sh@21 -- # val= 00:09:06.413 14:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # IFS=: 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # read -r var val 00:09:06.413 14:23:13 -- accel/accel.sh@21 -- # val= 00:09:06.413 14:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # IFS=: 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # read -r var val 00:09:06.413 14:23:13 -- accel/accel.sh@21 -- # val= 00:09:06.413 14:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # IFS=: 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # read -r var val 00:09:06.413 14:23:13 -- accel/accel.sh@21 -- # val= 00:09:06.413 14:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # IFS=: 00:09:06.413 14:23:13 -- 
accel/accel.sh@20 -- # read -r var val 00:09:06.413 14:23:13 -- accel/accel.sh@21 -- # val= 00:09:06.413 14:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # IFS=: 00:09:06.413 14:23:13 -- accel/accel.sh@20 -- # read -r var val 00:09:06.413 14:23:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:06.413 14:23:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:06.413 14:23:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:06.413 00:09:06.413 real 0m3.464s 00:09:06.413 user 0m2.933s 00:09:06.413 sys 0m0.322s 00:09:06.413 14:23:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:06.413 14:23:13 -- common/autotest_common.sh@10 -- # set +x 00:09:06.413 ************************************ 00:09:06.413 END TEST accel_crc32c 00:09:06.413 ************************************ 00:09:06.413 14:23:13 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:06.413 14:23:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:06.413 14:23:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.413 14:23:13 -- common/autotest_common.sh@10 -- # set +x 00:09:06.413 ************************************ 00:09:06.413 START TEST accel_crc32c_C2 00:09:06.413 ************************************ 00:09:06.413 14:23:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:06.413 14:23:13 -- accel/accel.sh@16 -- # local accel_opc 00:09:06.413 14:23:13 -- accel/accel.sh@17 -- # local accel_module 00:09:06.413 14:23:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:06.413 14:23:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:06.413 14:23:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:06.413 14:23:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:06.413 14:23:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:06.413 14:23:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.413 14:23:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:06.413 14:23:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:06.413 14:23:13 -- accel/accel.sh@41 -- # local IFS=, 00:09:06.413 14:23:13 -- accel/accel.sh@42 -- # jq -r . 00:09:06.413 [2024-12-06 14:23:13.106458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:06.413 [2024-12-06 14:23:13.106605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58842 ] 00:09:06.413 [2024-12-06 14:23:13.249722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.672 [2024-12-06 14:23:13.409302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.050 14:23:14 -- accel/accel.sh@18 -- # out=' 00:09:08.050 SPDK Configuration: 00:09:08.050 Core mask: 0x1 00:09:08.050 00:09:08.050 Accel Perf Configuration: 00:09:08.050 Workload Type: crc32c 00:09:08.050 CRC-32C seed: 0 00:09:08.050 Transfer size: 4096 bytes 00:09:08.050 Vector count 2 00:09:08.050 Module: software 00:09:08.050 Queue depth: 32 00:09:08.050 Allocate depth: 32 00:09:08.050 # threads/core: 1 00:09:08.050 Run time: 1 seconds 00:09:08.050 Verify: Yes 00:09:08.050 00:09:08.050 Running for 1 seconds... 
00:09:08.050 00:09:08.050 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:08.050 ------------------------------------------------------------------------------------ 00:09:08.050 0,0 342976/s 2679 MiB/s 0 0 00:09:08.050 ==================================================================================== 00:09:08.050 Total 342976/s 1339 MiB/s 0 0' 00:09:08.050 14:23:14 -- accel/accel.sh@20 -- # IFS=: 00:09:08.050 14:23:14 -- accel/accel.sh@20 -- # read -r var val 00:09:08.050 14:23:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:08.050 14:23:14 -- accel/accel.sh@12 -- # build_accel_config 00:09:08.050 14:23:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:08.050 14:23:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:08.050 14:23:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.050 14:23:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.050 14:23:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:08.050 14:23:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:08.050 14:23:14 -- accel/accel.sh@41 -- # local IFS=, 00:09:08.050 14:23:14 -- accel/accel.sh@42 -- # jq -r . 00:09:08.050 [2024-12-06 14:23:14.814363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:08.050 [2024-12-06 14:23:14.814487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58867 ] 00:09:08.050 [2024-12-06 14:23:14.950230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.311 [2024-12-06 14:23:15.120139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.311 14:23:15 -- accel/accel.sh@21 -- # val= 00:09:08.311 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.311 14:23:15 -- accel/accel.sh@21 -- # val= 00:09:08.311 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.311 14:23:15 -- accel/accel.sh@21 -- # val=0x1 00:09:08.311 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.311 14:23:15 -- accel/accel.sh@21 -- # val= 00:09:08.311 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.311 14:23:15 -- accel/accel.sh@21 -- # val= 00:09:08.311 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.311 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val=crc32c 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val=0 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val= 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val=software 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@23 -- # accel_module=software 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val=32 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val=32 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val=1 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val=Yes 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.312 14:23:15 -- accel/accel.sh@21 -- # val= 00:09:08.312 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.312 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:08.313 14:23:15 -- accel/accel.sh@21 -- # val= 00:09:08.313 14:23:15 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.313 14:23:15 -- accel/accel.sh@20 -- # IFS=: 00:09:08.313 14:23:15 -- accel/accel.sh@20 -- # read -r var val 00:09:09.690 14:23:16 -- accel/accel.sh@21 -- # val= 00:09:09.690 14:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # IFS=: 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # read -r var val 00:09:09.690 14:23:16 -- accel/accel.sh@21 -- # val= 00:09:09.690 14:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # IFS=: 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # read -r var val 00:09:09.690 14:23:16 -- accel/accel.sh@21 -- # val= 00:09:09.690 14:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # IFS=: 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # read -r var val 00:09:09.690 14:23:16 -- accel/accel.sh@21 -- # val= 00:09:09.690 14:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # IFS=: 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # read -r var val 00:09:09.690 14:23:16 -- accel/accel.sh@21 -- # val= 00:09:09.690 14:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # IFS=: 00:09:09.690 14:23:16 -- 
accel/accel.sh@20 -- # read -r var val 00:09:09.690 ************************************ 00:09:09.690 END TEST accel_crc32c_C2 00:09:09.690 ************************************ 00:09:09.690 14:23:16 -- accel/accel.sh@21 -- # val= 00:09:09.690 14:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # IFS=: 00:09:09.690 14:23:16 -- accel/accel.sh@20 -- # read -r var val 00:09:09.690 14:23:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:09.690 14:23:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:09.690 14:23:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:09.690 00:09:09.690 real 0m3.400s 00:09:09.690 user 0m2.878s 00:09:09.690 sys 0m0.312s 00:09:09.690 14:23:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.690 14:23:16 -- common/autotest_common.sh@10 -- # set +x 00:09:09.690 14:23:16 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:09.690 14:23:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:09.690 14:23:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.690 14:23:16 -- common/autotest_common.sh@10 -- # set +x 00:09:09.690 ************************************ 00:09:09.690 START TEST accel_copy 00:09:09.691 ************************************ 00:09:09.691 14:23:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:09:09.691 14:23:16 -- accel/accel.sh@16 -- # local accel_opc 00:09:09.691 14:23:16 -- accel/accel.sh@17 -- # local accel_module 00:09:09.691 14:23:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:09.691 14:23:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:09.691 14:23:16 -- accel/accel.sh@12 -- # build_accel_config 00:09:09.691 14:23:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:09.691 14:23:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:09.691 14:23:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:09.691 14:23:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:09.691 14:23:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:09.691 14:23:16 -- accel/accel.sh@41 -- # local IFS=, 00:09:09.691 14:23:16 -- accel/accel.sh@42 -- # jq -r . 00:09:09.691 [2024-12-06 14:23:16.556047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:09.691 [2024-12-06 14:23:16.556135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58900 ] 00:09:09.950 [2024-12-06 14:23:16.689307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.950 [2024-12-06 14:23:16.822048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.328 14:23:18 -- accel/accel.sh@18 -- # out=' 00:09:11.328 SPDK Configuration: 00:09:11.328 Core mask: 0x1 00:09:11.328 00:09:11.328 Accel Perf Configuration: 00:09:11.328 Workload Type: copy 00:09:11.328 Transfer size: 4096 bytes 00:09:11.328 Vector count 1 00:09:11.328 Module: software 00:09:11.328 Queue depth: 32 00:09:11.328 Allocate depth: 32 00:09:11.328 # threads/core: 1 00:09:11.328 Run time: 1 seconds 00:09:11.328 Verify: Yes 00:09:11.328 00:09:11.328 Running for 1 seconds... 
00:09:11.328 00:09:11.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:11.328 ------------------------------------------------------------------------------------ 00:09:11.328 0,0 318304/s 1243 MiB/s 0 0 00:09:11.328 ==================================================================================== 00:09:11.328 Total 318304/s 1243 MiB/s 0 0' 00:09:11.328 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.328 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.328 14:23:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:11.328 14:23:18 -- accel/accel.sh@12 -- # build_accel_config 00:09:11.328 14:23:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:11.328 14:23:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:11.328 14:23:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:11.328 14:23:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.328 14:23:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:11.328 14:23:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:11.328 14:23:18 -- accel/accel.sh@41 -- # local IFS=, 00:09:11.328 14:23:18 -- accel/accel.sh@42 -- # jq -r . 00:09:11.328 [2024-12-06 14:23:18.198828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:11.328 [2024-12-06 14:23:18.198930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:09:11.587 [2024-12-06 14:23:18.334453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.587 [2024-12-06 14:23:18.495054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val= 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val= 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val=0x1 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val= 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val= 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val=copy 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- 
accel/accel.sh@21 -- # val= 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val=software 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@23 -- # accel_module=software 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val=32 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val=32 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val=1 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val=Yes 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val= 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:11.845 14:23:18 -- accel/accel.sh@21 -- # val= 00:09:11.845 14:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # IFS=: 00:09:11.845 14:23:18 -- accel/accel.sh@20 -- # read -r var val 00:09:13.219 14:23:19 -- accel/accel.sh@21 -- # val= 00:09:13.219 14:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.219 14:23:19 -- accel/accel.sh@20 -- # IFS=: 00:09:13.219 14:23:19 -- accel/accel.sh@20 -- # read -r var val 00:09:13.220 14:23:19 -- accel/accel.sh@21 -- # val= 00:09:13.220 14:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # IFS=: 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # read -r var val 00:09:13.220 14:23:19 -- accel/accel.sh@21 -- # val= 00:09:13.220 14:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # IFS=: 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # read -r var val 00:09:13.220 14:23:19 -- accel/accel.sh@21 -- # val= 00:09:13.220 14:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # IFS=: 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # read -r var val 00:09:13.220 14:23:19 -- accel/accel.sh@21 -- # val= 00:09:13.220 14:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # IFS=: 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # read -r var val 00:09:13.220 14:23:19 -- accel/accel.sh@21 -- # val= 00:09:13.220 14:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.220 14:23:19 -- accel/accel.sh@20 -- # IFS=: 00:09:13.220 14:23:19 -- 
accel/accel.sh@20 -- # read -r var val 00:09:13.220 14:23:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:13.220 14:23:19 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:13.220 14:23:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.220 00:09:13.220 real 0m3.329s 00:09:13.220 user 0m2.814s 00:09:13.220 sys 0m0.306s 00:09:13.220 14:23:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.220 14:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:13.220 ************************************ 00:09:13.220 END TEST accel_copy 00:09:13.220 ************************************ 00:09:13.220 14:23:19 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:13.220 14:23:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:13.220 14:23:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.220 14:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:13.220 ************************************ 00:09:13.220 START TEST accel_fill 00:09:13.220 ************************************ 00:09:13.220 14:23:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:13.220 14:23:19 -- accel/accel.sh@16 -- # local accel_opc 00:09:13.220 14:23:19 -- accel/accel.sh@17 -- # local accel_module 00:09:13.220 14:23:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:13.220 14:23:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:13.220 14:23:19 -- accel/accel.sh@12 -- # build_accel_config 00:09:13.220 14:23:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:13.220 14:23:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:13.220 14:23:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:13.220 14:23:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:13.220 14:23:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:13.220 14:23:19 -- accel/accel.sh@41 -- # local IFS=, 00:09:13.220 14:23:19 -- accel/accel.sh@42 -- # jq -r . 00:09:13.220 [2024-12-06 14:23:19.941040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:13.220 [2024-12-06 14:23:19.941162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:09:13.220 [2024-12-06 14:23:20.077637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.478 [2024-12-06 14:23:20.240827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.850 14:23:21 -- accel/accel.sh@18 -- # out=' 00:09:14.851 SPDK Configuration: 00:09:14.851 Core mask: 0x1 00:09:14.851 00:09:14.851 Accel Perf Configuration: 00:09:14.851 Workload Type: fill 00:09:14.851 Fill pattern: 0x80 00:09:14.851 Transfer size: 4096 bytes 00:09:14.851 Vector count 1 00:09:14.851 Module: software 00:09:14.851 Queue depth: 64 00:09:14.851 Allocate depth: 64 00:09:14.851 # threads/core: 1 00:09:14.851 Run time: 1 seconds 00:09:14.851 Verify: Yes 00:09:14.851 00:09:14.851 Running for 1 seconds... 
00:09:14.851 00:09:14.851 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:14.851 ------------------------------------------------------------------------------------ 00:09:14.851 0,0 419392/s 1638 MiB/s 0 0 00:09:14.851 ==================================================================================== 00:09:14.851 Total 419392/s 1638 MiB/s 0 0' 00:09:14.851 14:23:21 -- accel/accel.sh@20 -- # IFS=: 00:09:14.851 14:23:21 -- accel/accel.sh@20 -- # read -r var val 00:09:14.851 14:23:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:14.851 14:23:21 -- accel/accel.sh@12 -- # build_accel_config 00:09:14.851 14:23:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:14.851 14:23:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:14.851 14:23:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.851 14:23:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.851 14:23:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:14.851 14:23:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:14.851 14:23:21 -- accel/accel.sh@41 -- # local IFS=, 00:09:14.851 14:23:21 -- accel/accel.sh@42 -- # jq -r . 00:09:14.851 [2024-12-06 14:23:21.685682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:14.851 [2024-12-06 14:23:21.685803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:09:15.109 [2024-12-06 14:23:21.822498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.109 [2024-12-06 14:23:21.995679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val= 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val= 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=0x1 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val= 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val= 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=fill 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@24 -- # accel_opc=fill 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=0x80 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 
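The fill run summarized above maps one-to-one onto its command line, so as a quick reference here is the invocation taken verbatim from this log (the -c /dev/fd/62 argument is the JSON accel config the harness pipes in), with the flag meanings per the accel_perf usage text printed earlier spelled out as comments:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
  # -w fill -> Workload Type: fill      -f 128 -> Fill pattern: 0x80
  # -q 64   -> Queue depth: 64          -a 64  -> Allocate depth: 64
  # -t 1    -> Run time: 1 seconds      -y     -> Verify: Yes

As a consistency check, the bandwidth column follows from the transfer rate and the 4096-byte transfer size: 419392 transfers/s × 4096 bytes ≈ 1638 MiB/s, matching the reported figure.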
00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val= 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=software 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@23 -- # accel_module=software 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=64 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=64 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=1 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val=Yes 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val= 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:15.367 14:23:22 -- accel/accel.sh@21 -- # val= 00:09:15.367 14:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # IFS=: 00:09:15.367 14:23:22 -- accel/accel.sh@20 -- # read -r var val 00:09:16.739 14:23:23 -- accel/accel.sh@21 -- # val= 00:09:16.739 14:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # IFS=: 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # read -r var val 00:09:16.739 14:23:23 -- accel/accel.sh@21 -- # val= 00:09:16.739 14:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # IFS=: 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # read -r var val 00:09:16.739 14:23:23 -- accel/accel.sh@21 -- # val= 00:09:16.739 14:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # IFS=: 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # read -r var val 00:09:16.739 14:23:23 -- accel/accel.sh@21 -- # val= 00:09:16.739 14:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # IFS=: 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # read -r var val 00:09:16.739 14:23:23 -- accel/accel.sh@21 -- # val= 00:09:16.739 14:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # IFS=: 
00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # read -r var val 00:09:16.739 14:23:23 -- accel/accel.sh@21 -- # val= 00:09:16.739 14:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # IFS=: 00:09:16.739 14:23:23 -- accel/accel.sh@20 -- # read -r var val 00:09:16.739 14:23:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:16.739 14:23:23 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:09:16.739 14:23:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:16.739 00:09:16.739 real 0m3.467s 00:09:16.739 user 0m2.910s 00:09:16.739 sys 0m0.349s 00:09:16.739 14:23:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:16.739 14:23:23 -- common/autotest_common.sh@10 -- # set +x 00:09:16.739 ************************************ 00:09:16.739 END TEST accel_fill 00:09:16.739 ************************************ 00:09:16.739 14:23:23 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:16.739 14:23:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:16.739 14:23:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.739 14:23:23 -- common/autotest_common.sh@10 -- # set +x 00:09:16.739 ************************************ 00:09:16.739 START TEST accel_copy_crc32c 00:09:16.739 ************************************ 00:09:16.739 14:23:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:09:16.739 14:23:23 -- accel/accel.sh@16 -- # local accel_opc 00:09:16.739 14:23:23 -- accel/accel.sh@17 -- # local accel_module 00:09:16.739 14:23:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:16.739 14:23:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:16.739 14:23:23 -- accel/accel.sh@12 -- # build_accel_config 00:09:16.739 14:23:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:16.739 14:23:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:16.739 14:23:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:16.739 14:23:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:16.739 14:23:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:16.739 14:23:23 -- accel/accel.sh@41 -- # local IFS=, 00:09:16.739 14:23:23 -- accel/accel.sh@42 -- # jq -r . 00:09:16.739 [2024-12-06 14:23:23.462021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:16.739 [2024-12-06 14:23:23.462139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59017 ] 00:09:16.739 [2024-12-06 14:23:23.593904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.996 [2024-12-06 14:23:23.753232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.366 14:23:25 -- accel/accel.sh@18 -- # out=' 00:09:18.366 SPDK Configuration: 00:09:18.366 Core mask: 0x1 00:09:18.366 00:09:18.366 Accel Perf Configuration: 00:09:18.367 Workload Type: copy_crc32c 00:09:18.367 CRC-32C seed: 0 00:09:18.367 Vector size: 4096 bytes 00:09:18.367 Transfer size: 4096 bytes 00:09:18.367 Vector count 1 00:09:18.367 Module: software 00:09:18.367 Queue depth: 32 00:09:18.367 Allocate depth: 32 00:09:18.367 # threads/core: 1 00:09:18.367 Run time: 1 seconds 00:09:18.367 Verify: Yes 00:09:18.367 00:09:18.367 Running for 1 seconds... 
00:09:18.367 00:09:18.367 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:18.367 ------------------------------------------------------------------------------------ 00:09:18.367 0,0 250752/s 979 MiB/s 0 0 00:09:18.367 ==================================================================================== 00:09:18.367 Total 250752/s 979 MiB/s 0 0' 00:09:18.367 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.367 14:23:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:18.367 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.367 14:23:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:18.367 14:23:25 -- accel/accel.sh@12 -- # build_accel_config 00:09:18.367 14:23:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:18.367 14:23:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.367 14:23:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.367 14:23:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:18.367 14:23:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:18.367 14:23:25 -- accel/accel.sh@41 -- # local IFS=, 00:09:18.367 14:23:25 -- accel/accel.sh@42 -- # jq -r . 00:09:18.367 [2024-12-06 14:23:25.179665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:18.367 [2024-12-06 14:23:25.179783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59042 ] 00:09:18.367 [2024-12-06 14:23:25.314689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.623 [2024-12-06 14:23:25.475572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.623 14:23:25 -- accel/accel.sh@21 -- # val= 00:09:18.623 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.623 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.623 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.623 14:23:25 -- accel/accel.sh@21 -- # val= 00:09:18.623 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.623 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.623 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.623 14:23:25 -- accel/accel.sh@21 -- # val=0x1 00:09:18.623 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val= 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val= 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val=0 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 
14:23:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val= 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val=software 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@23 -- # accel_module=software 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val=32 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val=32 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val=1 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val=Yes 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val= 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:18.624 14:23:25 -- accel/accel.sh@21 -- # val= 00:09:18.624 14:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # IFS=: 00:09:18.624 14:23:25 -- accel/accel.sh@20 -- # read -r var val 00:09:19.996 14:23:26 -- accel/accel.sh@21 -- # val= 00:09:19.996 14:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # IFS=: 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # read -r var val 00:09:19.996 14:23:26 -- accel/accel.sh@21 -- # val= 00:09:19.996 14:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # IFS=: 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # read -r var val 00:09:19.996 14:23:26 -- accel/accel.sh@21 -- # val= 00:09:19.996 14:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # IFS=: 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # read -r var val 00:09:19.996 14:23:26 -- accel/accel.sh@21 -- # val= 00:09:19.996 14:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # IFS=: 
00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # read -r var val 00:09:19.996 14:23:26 -- accel/accel.sh@21 -- # val= 00:09:19.996 14:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # IFS=: 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # read -r var val 00:09:19.996 14:23:26 -- accel/accel.sh@21 -- # val= 00:09:19.996 14:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # IFS=: 00:09:19.996 14:23:26 -- accel/accel.sh@20 -- # read -r var val 00:09:19.996 14:23:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:19.996 14:23:26 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:19.996 14:23:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:19.996 00:09:19.996 real 0m3.425s 00:09:19.996 user 0m2.879s 00:09:19.996 sys 0m0.330s 00:09:19.996 14:23:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.996 14:23:26 -- common/autotest_common.sh@10 -- # set +x 00:09:19.996 ************************************ 00:09:19.996 END TEST accel_copy_crc32c 00:09:19.996 ************************************ 00:09:19.996 14:23:26 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:19.996 14:23:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:19.996 14:23:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.996 14:23:26 -- common/autotest_common.sh@10 -- # set +x 00:09:19.996 ************************************ 00:09:19.996 START TEST accel_copy_crc32c_C2 00:09:19.996 ************************************ 00:09:19.996 14:23:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:19.996 14:23:26 -- accel/accel.sh@16 -- # local accel_opc 00:09:19.996 14:23:26 -- accel/accel.sh@17 -- # local accel_module 00:09:19.996 14:23:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:19.996 14:23:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:19.996 14:23:26 -- accel/accel.sh@12 -- # build_accel_config 00:09:19.996 14:23:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:19.996 14:23:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:19.996 14:23:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:19.996 14:23:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:19.996 14:23:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:19.996 14:23:26 -- accel/accel.sh@41 -- # local IFS=, 00:09:19.996 14:23:26 -- accel/accel.sh@42 -- # jq -r . 00:09:19.996 [2024-12-06 14:23:26.940057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:19.996 [2024-12-06 14:23:26.940154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:09:20.255 [2024-12-06 14:23:27.076131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.513 [2024-12-06 14:23:27.244935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.888 14:23:28 -- accel/accel.sh@18 -- # out=' 00:09:21.888 SPDK Configuration: 00:09:21.888 Core mask: 0x1 00:09:21.888 00:09:21.888 Accel Perf Configuration: 00:09:21.888 Workload Type: copy_crc32c 00:09:21.888 CRC-32C seed: 0 00:09:21.888 Vector size: 4096 bytes 00:09:21.888 Transfer size: 8192 bytes 00:09:21.888 Vector count 2 00:09:21.888 Module: software 00:09:21.888 Queue depth: 32 00:09:21.888 Allocate depth: 32 00:09:21.888 # threads/core: 1 00:09:21.889 Run time: 1 seconds 00:09:21.889 Verify: Yes 00:09:21.889 00:09:21.889 Running for 1 seconds... 00:09:21.889 00:09:21.889 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:21.889 ------------------------------------------------------------------------------------ 00:09:21.889 0,0 166528/s 1301 MiB/s 0 0 00:09:21.889 ==================================================================================== 00:09:21.889 Total 166528/s 650 MiB/s 0 0' 00:09:21.889 14:23:28 -- accel/accel.sh@20 -- # IFS=: 00:09:21.889 14:23:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:21.889 14:23:28 -- accel/accel.sh@20 -- # read -r var val 00:09:21.889 14:23:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:21.889 14:23:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.889 14:23:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.889 14:23:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.889 14:23:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.889 14:23:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.889 14:23:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:21.889 14:23:28 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.889 14:23:28 -- accel/accel.sh@42 -- # jq -r . 00:09:21.889 [2024-12-06 14:23:28.671525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
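The only difference between this copy_crc32c_C2 run and the plain accel_copy_crc32c run earlier is the added -C 2, which per the usage text sets the I/O vector count for supported workloads. Both invocations below are taken verbatim from this log; the annotations restate the corresponding configuration blocks:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y         # Vector count 1, Transfer size: 4096 bytes
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2    # Vector count 2, Transfer size: 8192 bytes

With two 4096-byte vectors per operation, the per-core figure above is consistent: 166528 transfers/s × 8192 bytes ≈ 1301 MiB/s.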
00:09:21.889 [2024-12-06 14:23:28.671652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59096 ] 00:09:21.889 [2024-12-06 14:23:28.802512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.148 [2024-12-06 14:23:28.982466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val= 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val= 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val=0x1 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val= 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val= 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val=0 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val='8192 bytes' 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val= 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val=software 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@23 -- # accel_module=software 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val=32 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val=32 
00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val=1 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.148 14:23:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:22.148 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.148 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.149 14:23:29 -- accel/accel.sh@21 -- # val=Yes 00:09:22.149 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.149 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.149 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.149 14:23:29 -- accel/accel.sh@21 -- # val= 00:09:22.149 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.149 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.149 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:22.149 14:23:29 -- accel/accel.sh@21 -- # val= 00:09:22.149 14:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.149 14:23:29 -- accel/accel.sh@20 -- # IFS=: 00:09:22.149 14:23:29 -- accel/accel.sh@20 -- # read -r var val 00:09:23.526 14:23:30 -- accel/accel.sh@21 -- # val= 00:09:23.526 14:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # IFS=: 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # read -r var val 00:09:23.526 14:23:30 -- accel/accel.sh@21 -- # val= 00:09:23.526 14:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # IFS=: 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # read -r var val 00:09:23.526 14:23:30 -- accel/accel.sh@21 -- # val= 00:09:23.526 14:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # IFS=: 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # read -r var val 00:09:23.526 14:23:30 -- accel/accel.sh@21 -- # val= 00:09:23.526 14:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # IFS=: 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # read -r var val 00:09:23.526 14:23:30 -- accel/accel.sh@21 -- # val= 00:09:23.526 14:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # IFS=: 00:09:23.526 14:23:30 -- accel/accel.sh@20 -- # read -r var val 00:09:23.526 14:23:30 -- accel/accel.sh@21 -- # val= 00:09:23.527 14:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:23.527 14:23:30 -- accel/accel.sh@20 -- # IFS=: 00:09:23.527 14:23:30 -- accel/accel.sh@20 -- # read -r var val 00:09:23.527 14:23:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:23.527 14:23:30 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:23.527 14:23:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:23.527 00:09:23.527 real 0m3.508s 00:09:23.527 user 0m1.459s 00:09:23.527 sys 0m0.184s 00:09:23.527 14:23:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.527 ************************************ 00:09:23.527 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:23.527 END TEST accel_copy_crc32c_C2 00:09:23.527 ************************************ 00:09:23.527 14:23:30 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:23.527 14:23:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:09:23.527 14:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.527 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:23.527 ************************************ 00:09:23.527 START TEST accel_dualcast 00:09:23.527 ************************************ 00:09:23.527 14:23:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:09:23.527 14:23:30 -- accel/accel.sh@16 -- # local accel_opc 00:09:23.527 14:23:30 -- accel/accel.sh@17 -- # local accel_module 00:09:23.527 14:23:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:09:23.527 14:23:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:23.527 14:23:30 -- accel/accel.sh@12 -- # build_accel_config 00:09:23.527 14:23:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:23.527 14:23:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.527 14:23:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.527 14:23:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:23.527 14:23:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:23.527 14:23:30 -- accel/accel.sh@41 -- # local IFS=, 00:09:23.527 14:23:30 -- accel/accel.sh@42 -- # jq -r . 00:09:23.785 [2024-12-06 14:23:30.504235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.785 [2024-12-06 14:23:30.504338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:09:23.785 [2024-12-06 14:23:30.638160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.043 [2024-12-06 14:23:30.797220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.419 14:23:32 -- accel/accel.sh@18 -- # out=' 00:09:25.419 SPDK Configuration: 00:09:25.419 Core mask: 0x1 00:09:25.419 00:09:25.419 Accel Perf Configuration: 00:09:25.419 Workload Type: dualcast 00:09:25.419 Transfer size: 4096 bytes 00:09:25.419 Vector count 1 00:09:25.419 Module: software 00:09:25.419 Queue depth: 32 00:09:25.419 Allocate depth: 32 00:09:25.419 # threads/core: 1 00:09:25.419 Run time: 1 seconds 00:09:25.419 Verify: Yes 00:09:25.419 00:09:25.419 Running for 1 seconds... 00:09:25.419 00:09:25.419 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:25.419 ------------------------------------------------------------------------------------ 00:09:25.419 0,0 331104/s 1293 MiB/s 0 0 00:09:25.419 ==================================================================================== 00:09:25.419 Total 331104/s 1293 MiB/s 0 0' 00:09:25.419 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.419 14:23:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:25.419 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.419 14:23:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:25.419 14:23:32 -- accel/accel.sh@12 -- # build_accel_config 00:09:25.419 14:23:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:25.419 14:23:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:25.419 14:23:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:25.419 14:23:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:25.419 14:23:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:25.419 14:23:32 -- accel/accel.sh@41 -- # local IFS=, 00:09:25.419 14:23:32 -- accel/accel.sh@42 -- # jq -r . 
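The dualcast result is consistent with the same transfers-times-size arithmetic:

  331104 transfers/s x 4096 bytes ~= 1,356,201,984 B/s ~= 1293 MiB/s

Dualcast copies one 4096-byte source to two destinations per operation, so the bytes actually written are roughly double the figure derived from the transfer size alone; the numbers above count each transfer once.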
00:09:25.419 [2024-12-06 14:23:32.209995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:25.419 [2024-12-06 14:23:32.210157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59159 ] 00:09:25.419 [2024-12-06 14:23:32.342475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.678 [2024-12-06 14:23:32.500548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val= 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val= 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val=0x1 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val= 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val= 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val=dualcast 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val= 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val=software 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@23 -- # accel_module=software 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val=32 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val=32 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val=1 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 
14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val=Yes 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val= 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:25.678 14:23:32 -- accel/accel.sh@21 -- # val= 00:09:25.678 14:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # IFS=: 00:09:25.678 14:23:32 -- accel/accel.sh@20 -- # read -r var val 00:09:27.055 14:23:33 -- accel/accel.sh@21 -- # val= 00:09:27.055 14:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # IFS=: 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # read -r var val 00:09:27.055 14:23:33 -- accel/accel.sh@21 -- # val= 00:09:27.055 14:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # IFS=: 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # read -r var val 00:09:27.055 14:23:33 -- accel/accel.sh@21 -- # val= 00:09:27.055 14:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # IFS=: 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # read -r var val 00:09:27.055 14:23:33 -- accel/accel.sh@21 -- # val= 00:09:27.055 14:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # IFS=: 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # read -r var val 00:09:27.055 14:23:33 -- accel/accel.sh@21 -- # val= 00:09:27.055 14:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # IFS=: 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # read -r var val 00:09:27.055 14:23:33 -- accel/accel.sh@21 -- # val= 00:09:27.055 14:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # IFS=: 00:09:27.055 14:23:33 -- accel/accel.sh@20 -- # read -r var val 00:09:27.055 14:23:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:27.055 14:23:33 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:09:27.055 14:23:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:27.055 00:09:27.055 real 0m3.429s 00:09:27.055 user 0m2.907s 00:09:27.055 sys 0m0.312s 00:09:27.055 14:23:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:27.055 ************************************ 00:09:27.055 END TEST accel_dualcast 00:09:27.055 ************************************ 00:09:27.055 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.055 14:23:33 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:27.055 14:23:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:27.055 14:23:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.055 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.055 ************************************ 00:09:27.055 START TEST accel_compare 00:09:27.055 ************************************ 00:09:27.055 14:23:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:09:27.055 
14:23:33 -- accel/accel.sh@16 -- # local accel_opc 00:09:27.055 14:23:33 -- accel/accel.sh@17 -- # local accel_module 00:09:27.055 14:23:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:09:27.055 14:23:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:27.055 14:23:33 -- accel/accel.sh@12 -- # build_accel_config 00:09:27.055 14:23:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:27.055 14:23:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:27.055 14:23:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:27.055 14:23:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:27.055 14:23:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:27.055 14:23:33 -- accel/accel.sh@41 -- # local IFS=, 00:09:27.055 14:23:33 -- accel/accel.sh@42 -- # jq -r . 00:09:27.055 [2024-12-06 14:23:33.985503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:27.055 [2024-12-06 14:23:33.985598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59193 ] 00:09:27.316 [2024-12-06 14:23:34.118706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.316 [2024-12-06 14:23:34.276771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.222 14:23:35 -- accel/accel.sh@18 -- # out=' 00:09:29.222 SPDK Configuration: 00:09:29.222 Core mask: 0x1 00:09:29.222 00:09:29.222 Accel Perf Configuration: 00:09:29.222 Workload Type: compare 00:09:29.222 Transfer size: 4096 bytes 00:09:29.222 Vector count 1 00:09:29.222 Module: software 00:09:29.222 Queue depth: 32 00:09:29.222 Allocate depth: 32 00:09:29.222 # threads/core: 1 00:09:29.222 Run time: 1 seconds 00:09:29.222 Verify: Yes 00:09:29.222 00:09:29.222 Running for 1 seconds... 00:09:29.222 00:09:29.222 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:29.222 ------------------------------------------------------------------------------------ 00:09:29.222 0,0 451040/s 1761 MiB/s 0 0 00:09:29.222 ==================================================================================== 00:09:29.222 Total 451040/s 1761 MiB/s 0 0' 00:09:29.222 14:23:35 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:29.222 14:23:35 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:35 -- accel/accel.sh@12 -- # build_accel_config 00:09:29.222 14:23:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:29.222 14:23:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:29.222 14:23:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:29.222 14:23:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:29.222 14:23:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:29.222 14:23:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:29.222 14:23:35 -- accel/accel.sh@41 -- # local IFS=, 00:09:29.222 14:23:35 -- accel/accel.sh@42 -- # jq -r . 00:09:29.222 [2024-12-06 14:23:35.711054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
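compare only reads and checks two 4096-byte buffers for equality per operation, with no data movement, which is why it posts the highest rate of this group. The reported figure again matches transfers times transfer size:

  451040 transfers/s x 4096 bytes ~= 1,847,459,840 B/s ~= 1761 MiB/s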
00:09:29.222 [2024-12-06 14:23:35.711190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59218 ] 00:09:29.222 [2024-12-06 14:23:35.846055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.222 [2024-12-06 14:23:36.014158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val= 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val= 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val=0x1 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val= 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val= 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val=compare 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@24 -- # accel_opc=compare 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val= 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val=software 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@23 -- # accel_module=software 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val=32 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val=32 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val=1 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val='1 seconds' 
00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val=Yes 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val= 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:29.222 14:23:36 -- accel/accel.sh@21 -- # val= 00:09:29.222 14:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # IFS=: 00:09:29.222 14:23:36 -- accel/accel.sh@20 -- # read -r var val 00:09:30.598 14:23:37 -- accel/accel.sh@21 -- # val= 00:09:30.598 14:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # IFS=: 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # read -r var val 00:09:30.598 14:23:37 -- accel/accel.sh@21 -- # val= 00:09:30.598 14:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # IFS=: 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # read -r var val 00:09:30.598 14:23:37 -- accel/accel.sh@21 -- # val= 00:09:30.598 14:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # IFS=: 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # read -r var val 00:09:30.598 14:23:37 -- accel/accel.sh@21 -- # val= 00:09:30.598 14:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # IFS=: 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # read -r var val 00:09:30.598 14:23:37 -- accel/accel.sh@21 -- # val= 00:09:30.598 14:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # IFS=: 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # read -r var val 00:09:30.598 14:23:37 -- accel/accel.sh@21 -- # val= 00:09:30.598 14:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # IFS=: 00:09:30.598 14:23:37 -- accel/accel.sh@20 -- # read -r var val 00:09:30.598 14:23:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:30.598 14:23:37 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:09:30.598 14:23:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:30.598 00:09:30.598 real 0m3.441s 00:09:30.598 user 0m2.895s 00:09:30.598 sys 0m0.335s 00:09:30.598 14:23:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:30.598 14:23:37 -- common/autotest_common.sh@10 -- # set +x 00:09:30.598 ************************************ 00:09:30.598 END TEST accel_compare 00:09:30.598 ************************************ 00:09:30.598 14:23:37 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:30.598 14:23:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:30.598 14:23:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.598 14:23:37 -- common/autotest_common.sh@10 -- # set +x 00:09:30.598 ************************************ 00:09:30.598 START TEST accel_xor 00:09:30.598 ************************************ 00:09:30.598 14:23:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:09:30.598 14:23:37 -- accel/accel.sh@16 -- # local accel_opc 00:09:30.598 14:23:37 -- accel/accel.sh@17 -- # local accel_module 00:09:30.598 
14:23:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:09:30.598 14:23:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:30.598 14:23:37 -- accel/accel.sh@12 -- # build_accel_config 00:09:30.598 14:23:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:30.598 14:23:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:30.598 14:23:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:30.598 14:23:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:30.598 14:23:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:30.598 14:23:37 -- accel/accel.sh@41 -- # local IFS=, 00:09:30.598 14:23:37 -- accel/accel.sh@42 -- # jq -r . 00:09:30.598 [2024-12-06 14:23:37.476981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:30.598 [2024-12-06 14:23:37.477116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:09:30.857 [2024-12-06 14:23:37.609170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.857 [2024-12-06 14:23:37.764840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.233 14:23:39 -- accel/accel.sh@18 -- # out=' 00:09:32.233 SPDK Configuration: 00:09:32.233 Core mask: 0x1 00:09:32.233 00:09:32.233 Accel Perf Configuration: 00:09:32.233 Workload Type: xor 00:09:32.233 Source buffers: 2 00:09:32.233 Transfer size: 4096 bytes 00:09:32.233 Vector count 1 00:09:32.233 Module: software 00:09:32.233 Queue depth: 32 00:09:32.233 Allocate depth: 32 00:09:32.233 # threads/core: 1 00:09:32.233 Run time: 1 seconds 00:09:32.233 Verify: Yes 00:09:32.233 00:09:32.233 Running for 1 seconds... 00:09:32.233 00:09:32.233 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:32.233 ------------------------------------------------------------------------------------ 00:09:32.233 0,0 212640/s 830 MiB/s 0 0 00:09:32.233 ==================================================================================== 00:09:32.233 Total 212640/s 830 MiB/s 0 0' 00:09:32.233 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.233 14:23:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:32.233 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.233 14:23:39 -- accel/accel.sh@12 -- # build_accel_config 00:09:32.233 14:23:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:32.233 14:23:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:32.234 14:23:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:32.234 14:23:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:32.234 14:23:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:32.234 14:23:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:32.234 14:23:39 -- accel/accel.sh@41 -- # local IFS=, 00:09:32.234 14:23:39 -- accel/accel.sh@42 -- # jq -r . 00:09:32.234 [2024-12-06 14:23:39.181857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
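No -x flag is passed on this first xor run, yet the configuration reports "Source buffers: 2", so two sources appear to be the tool's default. The throughput lines up with the usual arithmetic:

  212640 transfers/s x 4096 bytes ~= 870,973,440 B/s ~= 830 MiB/s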
00:09:32.234 [2024-12-06 14:23:39.181963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59272 ] 00:09:32.492 [2024-12-06 14:23:39.319450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.750 [2024-12-06 14:23:39.470610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val= 00:09:32.750 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val= 00:09:32.750 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val=0x1 00:09:32.750 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val= 00:09:32.750 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val= 00:09:32.750 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val=xor 00:09:32.750 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.750 14:23:39 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val=2 00:09:32.750 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.750 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.750 14:23:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val= 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val=software 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@23 -- # accel_module=software 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val=32 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val=32 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val=1 00:09:32.751 14:23:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val=Yes 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val= 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.751 14:23:39 -- accel/accel.sh@21 -- # val= 00:09:32.751 14:23:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # IFS=: 00:09:32.751 14:23:39 -- accel/accel.sh@20 -- # read -r var val 00:09:34.128 14:23:40 -- accel/accel.sh@21 -- # val= 00:09:34.128 14:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # IFS=: 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # read -r var val 00:09:34.128 14:23:40 -- accel/accel.sh@21 -- # val= 00:09:34.128 14:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # IFS=: 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # read -r var val 00:09:34.128 14:23:40 -- accel/accel.sh@21 -- # val= 00:09:34.128 14:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # IFS=: 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # read -r var val 00:09:34.128 14:23:40 -- accel/accel.sh@21 -- # val= 00:09:34.128 14:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # IFS=: 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # read -r var val 00:09:34.128 14:23:40 -- accel/accel.sh@21 -- # val= 00:09:34.128 14:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # IFS=: 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # read -r var val 00:09:34.128 14:23:40 -- accel/accel.sh@21 -- # val= 00:09:34.128 14:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # IFS=: 00:09:34.128 14:23:40 -- accel/accel.sh@20 -- # read -r var val 00:09:34.128 14:23:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:34.128 14:23:40 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:34.128 14:23:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:34.128 00:09:34.128 real 0m3.409s 00:09:34.128 user 0m2.869s 00:09:34.128 sys 0m0.329s 00:09:34.128 14:23:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.128 ************************************ 00:09:34.128 END TEST accel_xor 00:09:34.128 ************************************ 00:09:34.128 14:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 14:23:40 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:34.128 14:23:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:34.128 14:23:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.128 14:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 ************************************ 00:09:34.128 START TEST accel_xor 00:09:34.128 ************************************ 00:09:34.128 
14:23:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:09:34.128 14:23:40 -- accel/accel.sh@16 -- # local accel_opc 00:09:34.128 14:23:40 -- accel/accel.sh@17 -- # local accel_module 00:09:34.128 14:23:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:09:34.128 14:23:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:34.128 14:23:40 -- accel/accel.sh@12 -- # build_accel_config 00:09:34.128 14:23:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:34.128 14:23:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:34.128 14:23:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:34.128 14:23:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:34.128 14:23:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:34.128 14:23:40 -- accel/accel.sh@41 -- # local IFS=, 00:09:34.128 14:23:40 -- accel/accel.sh@42 -- # jq -r . 00:09:34.128 [2024-12-06 14:23:40.945188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:34.128 [2024-12-06 14:23:40.945308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59312 ] 00:09:34.128 [2024-12-06 14:23:41.083275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.388 [2024-12-06 14:23:41.231899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.812 14:23:42 -- accel/accel.sh@18 -- # out=' 00:09:35.812 SPDK Configuration: 00:09:35.812 Core mask: 0x1 00:09:35.812 00:09:35.812 Accel Perf Configuration: 00:09:35.812 Workload Type: xor 00:09:35.812 Source buffers: 3 00:09:35.812 Transfer size: 4096 bytes 00:09:35.812 Vector count 1 00:09:35.812 Module: software 00:09:35.812 Queue depth: 32 00:09:35.812 Allocate depth: 32 00:09:35.812 # threads/core: 1 00:09:35.812 Run time: 1 seconds 00:09:35.813 Verify: Yes 00:09:35.813 00:09:35.813 Running for 1 seconds... 00:09:35.813 00:09:35.813 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:35.813 ------------------------------------------------------------------------------------ 00:09:35.813 0,0 204928/s 800 MiB/s 0 0 00:09:35.813 ==================================================================================== 00:09:35.813 Total 204928/s 800 MiB/s 0 0' 00:09:35.813 14:23:42 -- accel/accel.sh@20 -- # IFS=: 00:09:35.813 14:23:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:35.813 14:23:42 -- accel/accel.sh@20 -- # read -r var val 00:09:35.813 14:23:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:35.813 14:23:42 -- accel/accel.sh@12 -- # build_accel_config 00:09:35.813 14:23:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:35.813 14:23:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:35.813 14:23:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:35.813 14:23:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:35.813 14:23:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:35.813 14:23:42 -- accel/accel.sh@41 -- # local IFS=, 00:09:35.813 14:23:42 -- accel/accel.sh@42 -- # jq -r . 00:09:35.813 [2024-12-06 14:23:42.640183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
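The second xor run adds -x 3, which shows up as "Source buffers: 3" in the configuration; XORing one extra input buffer costs a modest amount of throughput compared with the 2-source run (830 MiB/s down to 800 MiB/s):

  204928 transfers/s x 4096 bytes ~= 839,385,088 B/s ~= 800 MiB/s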
00:09:35.813 [2024-12-06 14:23:42.640722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59334 ] 00:09:35.813 [2024-12-06 14:23:42.778185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.072 [2024-12-06 14:23:42.926576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val= 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val= 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val=0x1 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val= 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val= 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val=xor 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val=3 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val= 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.072 14:23:43 -- accel/accel.sh@21 -- # val=software 00:09:36.072 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.072 14:23:43 -- accel/accel.sh@23 -- # accel_module=software 00:09:36.072 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.073 14:23:43 -- accel/accel.sh@21 -- # val=32 00:09:36.073 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.073 14:23:43 -- accel/accel.sh@21 -- # val=32 00:09:36.073 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.073 14:23:43 -- accel/accel.sh@21 -- # val=1 00:09:36.073 14:23:43 -- 
accel/accel.sh@22 -- # case "$var" in 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.073 14:23:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:36.073 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.073 14:23:43 -- accel/accel.sh@21 -- # val=Yes 00:09:36.073 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.073 14:23:43 -- accel/accel.sh@21 -- # val= 00:09:36.073 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:36.073 14:23:43 -- accel/accel.sh@21 -- # val= 00:09:36.073 14:23:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # IFS=: 00:09:36.073 14:23:43 -- accel/accel.sh@20 -- # read -r var val 00:09:37.449 14:23:44 -- accel/accel.sh@21 -- # val= 00:09:37.449 14:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # IFS=: 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # read -r var val 00:09:37.449 14:23:44 -- accel/accel.sh@21 -- # val= 00:09:37.449 14:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # IFS=: 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # read -r var val 00:09:37.449 14:23:44 -- accel/accel.sh@21 -- # val= 00:09:37.449 14:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # IFS=: 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # read -r var val 00:09:37.449 14:23:44 -- accel/accel.sh@21 -- # val= 00:09:37.449 14:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # IFS=: 00:09:37.449 ************************************ 00:09:37.449 END TEST accel_xor 00:09:37.449 ************************************ 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # read -r var val 00:09:37.449 14:23:44 -- accel/accel.sh@21 -- # val= 00:09:37.449 14:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # IFS=: 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # read -r var val 00:09:37.449 14:23:44 -- accel/accel.sh@21 -- # val= 00:09:37.449 14:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # IFS=: 00:09:37.449 14:23:44 -- accel/accel.sh@20 -- # read -r var val 00:09:37.449 14:23:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:37.449 14:23:44 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:37.449 14:23:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:37.450 00:09:37.450 real 0m3.388s 00:09:37.450 user 0m2.861s 00:09:37.450 sys 0m0.315s 00:09:37.450 14:23:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.450 14:23:44 -- common/autotest_common.sh@10 -- # set +x 00:09:37.450 14:23:44 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:37.450 14:23:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:37.450 14:23:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.450 14:23:44 -- common/autotest_common.sh@10 -- # set +x 00:09:37.450 ************************************ 00:09:37.450 START TEST accel_dif_verify 00:09:37.450 ************************************ 
00:09:37.450 14:23:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:09:37.450 14:23:44 -- accel/accel.sh@16 -- # local accel_opc 00:09:37.450 14:23:44 -- accel/accel.sh@17 -- # local accel_module 00:09:37.450 14:23:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:09:37.450 14:23:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:37.450 14:23:44 -- accel/accel.sh@12 -- # build_accel_config 00:09:37.450 14:23:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:37.450 14:23:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.450 14:23:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.450 14:23:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:37.450 14:23:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:37.450 14:23:44 -- accel/accel.sh@41 -- # local IFS=, 00:09:37.450 14:23:44 -- accel/accel.sh@42 -- # jq -r . 00:09:37.450 [2024-12-06 14:23:44.396122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:37.450 [2024-12-06 14:23:44.396226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59373 ] 00:09:37.709 [2024-12-06 14:23:44.536654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.968 [2024-12-06 14:23:44.683727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.343 14:23:46 -- accel/accel.sh@18 -- # out=' 00:09:39.343 SPDK Configuration: 00:09:39.343 Core mask: 0x1 00:09:39.343 00:09:39.343 Accel Perf Configuration: 00:09:39.343 Workload Type: dif_verify 00:09:39.343 Vector size: 4096 bytes 00:09:39.343 Transfer size: 4096 bytes 00:09:39.343 Block size: 512 bytes 00:09:39.343 Metadata size: 8 bytes 00:09:39.343 Vector count 1 00:09:39.343 Module: software 00:09:39.343 Queue depth: 32 00:09:39.343 Allocate depth: 32 00:09:39.343 # threads/core: 1 00:09:39.343 Run time: 1 seconds 00:09:39.343 Verify: No 00:09:39.343 00:09:39.343 Running for 1 seconds... 00:09:39.343 00:09:39.343 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:39.343 ------------------------------------------------------------------------------------ 00:09:39.343 0,0 102016/s 404 MiB/s 0 0 00:09:39.343 ==================================================================================== 00:09:39.343 Total 102016/s 398 MiB/s 0 0' 00:09:39.343 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.343 14:23:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:39.343 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.343 14:23:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:39.343 14:23:46 -- accel/accel.sh@12 -- # build_accel_config 00:09:39.343 14:23:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:39.343 14:23:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.343 14:23:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.343 14:23:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:39.344 14:23:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:39.344 14:23:46 -- accel/accel.sh@41 -- # local IFS=, 00:09:39.344 14:23:46 -- accel/accel.sh@42 -- # jq -r . 00:09:39.344 [2024-12-06 14:23:46.092889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
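dif_verify checks DIF protection information: each 4096-byte transfer is treated as eight 512-byte blocks, and each block carries 8 bytes of metadata (the "Block size: 512" and "Metadata size: 8" lines above). The two bandwidth rows are consistent with counting the payload with and without that metadata:

  102016 transfers/s x (4096 + 8 x 8) bytes ~= 424,386,560 B/s ~= 404 MiB/s  (per-core row)
  102016 transfers/s x 4096 bytes           ~= 417,857,536 B/s ~= 398 MiB/s  (Total row)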
00:09:39.344 [2024-12-06 14:23:46.093673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59388 ] 00:09:39.344 [2024-12-06 14:23:46.242155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.603 [2024-12-06 14:23:46.402272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val= 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val= 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val=0x1 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val= 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val= 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val=dif_verify 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val= 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val=software 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@23 -- # accel_module=software 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 
-- # val=32 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val=32 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val=1 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val=No 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val= 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:39.603 14:23:46 -- accel/accel.sh@21 -- # val= 00:09:39.603 14:23:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # IFS=: 00:09:39.603 14:23:46 -- accel/accel.sh@20 -- # read -r var val 00:09:41.007 14:23:47 -- accel/accel.sh@21 -- # val= 00:09:41.007 14:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # IFS=: 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # read -r var val 00:09:41.007 14:23:47 -- accel/accel.sh@21 -- # val= 00:09:41.007 14:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # IFS=: 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # read -r var val 00:09:41.007 14:23:47 -- accel/accel.sh@21 -- # val= 00:09:41.007 14:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # IFS=: 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # read -r var val 00:09:41.007 14:23:47 -- accel/accel.sh@21 -- # val= 00:09:41.007 14:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # IFS=: 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # read -r var val 00:09:41.007 14:23:47 -- accel/accel.sh@21 -- # val= 00:09:41.007 14:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # IFS=: 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # read -r var val 00:09:41.007 14:23:47 -- accel/accel.sh@21 -- # val= 00:09:41.007 14:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # IFS=: 00:09:41.007 ************************************ 00:09:41.007 END TEST accel_dif_verify 00:09:41.007 ************************************ 00:09:41.007 14:23:47 -- accel/accel.sh@20 -- # read -r var val 00:09:41.007 14:23:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:41.007 14:23:47 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:09:41.007 14:23:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:41.007 00:09:41.007 real 0m3.404s 00:09:41.007 user 0m2.869s 00:09:41.007 sys 0m0.326s 00:09:41.007 14:23:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:41.007 
14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:09:41.007 14:23:47 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:41.007 14:23:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:41.007 14:23:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.007 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:09:41.007 ************************************ 00:09:41.007 START TEST accel_dif_generate 00:09:41.007 ************************************ 00:09:41.007 14:23:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:09:41.007 14:23:47 -- accel/accel.sh@16 -- # local accel_opc 00:09:41.007 14:23:47 -- accel/accel.sh@17 -- # local accel_module 00:09:41.007 14:23:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:09:41.007 14:23:47 -- accel/accel.sh@12 -- # build_accel_config 00:09:41.007 14:23:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:41.007 14:23:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:41.007 14:23:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.007 14:23:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.007 14:23:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:41.007 14:23:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:41.007 14:23:47 -- accel/accel.sh@41 -- # local IFS=, 00:09:41.007 14:23:47 -- accel/accel.sh@42 -- # jq -r . 00:09:41.007 [2024-12-06 14:23:47.854933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.007 [2024-12-06 14:23:47.855018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59428 ] 00:09:41.265 [2024-12-06 14:23:47.985347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.265 [2024-12-06 14:23:48.118107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.636 14:23:49 -- accel/accel.sh@18 -- # out=' 00:09:42.636 SPDK Configuration: 00:09:42.636 Core mask: 0x1 00:09:42.636 00:09:42.636 Accel Perf Configuration: 00:09:42.636 Workload Type: dif_generate 00:09:42.636 Vector size: 4096 bytes 00:09:42.636 Transfer size: 4096 bytes 00:09:42.636 Block size: 512 bytes 00:09:42.636 Metadata size: 8 bytes 00:09:42.636 Vector count 1 00:09:42.636 Module: software 00:09:42.636 Queue depth: 32 00:09:42.636 Allocate depth: 32 00:09:42.636 # threads/core: 1 00:09:42.636 Run time: 1 seconds 00:09:42.636 Verify: No 00:09:42.636 00:09:42.636 Running for 1 seconds... 
00:09:42.636 00:09:42.636 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:42.636 ------------------------------------------------------------------------------------ 00:09:42.636 0,0 122144/s 484 MiB/s 0 0 00:09:42.636 ==================================================================================== 00:09:42.636 Total 122144/s 477 MiB/s 0 0' 00:09:42.636 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:42.636 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:42.636 14:23:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:42.636 14:23:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:42.636 14:23:49 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.636 14:23:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:42.636 14:23:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.636 14:23:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.636 14:23:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:42.636 14:23:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:42.636 14:23:49 -- accel/accel.sh@41 -- # local IFS=, 00:09:42.636 14:23:49 -- accel/accel.sh@42 -- # jq -r . 00:09:42.636 [2024-12-06 14:23:49.503431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:42.636 [2024-12-06 14:23:49.503533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59453 ] 00:09:42.894 [2024-12-06 14:23:49.635669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.894 [2024-12-06 14:23:49.787121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val= 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val= 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val=0x1 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val= 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val= 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val=dif_generate 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 
00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val= 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val=software 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@23 -- # accel_module=software 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val=32 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val=32 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val=1 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val=No 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val= 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:43.153 14:23:49 -- accel/accel.sh@21 -- # val= 00:09:43.153 14:23:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # IFS=: 00:09:43.153 14:23:49 -- accel/accel.sh@20 -- # read -r var val 00:09:44.526 14:23:51 -- accel/accel.sh@21 -- # val= 00:09:44.526 14:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # IFS=: 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # read -r var val 00:09:44.526 14:23:51 -- accel/accel.sh@21 -- # val= 00:09:44.526 14:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # IFS=: 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # read -r var val 00:09:44.526 14:23:51 -- accel/accel.sh@21 -- # val= 00:09:44.526 14:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.526 14:23:51 -- 
accel/accel.sh@20 -- # IFS=: 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # read -r var val 00:09:44.526 14:23:51 -- accel/accel.sh@21 -- # val= 00:09:44.526 14:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # IFS=: 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # read -r var val 00:09:44.526 ************************************ 00:09:44.526 END TEST accel_dif_generate 00:09:44.526 ************************************ 00:09:44.526 14:23:51 -- accel/accel.sh@21 -- # val= 00:09:44.526 14:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # IFS=: 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # read -r var val 00:09:44.526 14:23:51 -- accel/accel.sh@21 -- # val= 00:09:44.526 14:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # IFS=: 00:09:44.526 14:23:51 -- accel/accel.sh@20 -- # read -r var val 00:09:44.526 14:23:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:44.526 14:23:51 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:09:44.526 14:23:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.526 00:09:44.526 real 0m3.239s 00:09:44.526 user 0m2.730s 00:09:44.526 sys 0m0.300s 00:09:44.526 14:23:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.526 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:44.527 14:23:51 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:44.527 14:23:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:44.527 14:23:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.527 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:44.527 ************************************ 00:09:44.527 START TEST accel_dif_generate_copy 00:09:44.527 ************************************ 00:09:44.527 14:23:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:09:44.527 14:23:51 -- accel/accel.sh@16 -- # local accel_opc 00:09:44.527 14:23:51 -- accel/accel.sh@17 -- # local accel_module 00:09:44.527 14:23:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:09:44.527 14:23:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:44.527 14:23:51 -- accel/accel.sh@12 -- # build_accel_config 00:09:44.527 14:23:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.527 14:23:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.527 14:23:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.527 14:23:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.527 14:23:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.527 14:23:51 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.527 14:23:51 -- accel/accel.sh@42 -- # jq -r . 00:09:44.527 [2024-12-06 14:23:51.148308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:44.527 [2024-12-06 14:23:51.148612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59482 ] 00:09:44.527 [2024-12-06 14:23:51.287684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.527 [2024-12-06 14:23:51.446144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.905 14:23:52 -- accel/accel.sh@18 -- # out=' 00:09:45.905 SPDK Configuration: 00:09:45.905 Core mask: 0x1 00:09:45.905 00:09:45.905 Accel Perf Configuration: 00:09:45.905 Workload Type: dif_generate_copy 00:09:45.905 Vector size: 4096 bytes 00:09:45.905 Transfer size: 4096 bytes 00:09:45.905 Vector count 1 00:09:45.905 Module: software 00:09:45.905 Queue depth: 32 00:09:45.905 Allocate depth: 32 00:09:45.905 # threads/core: 1 00:09:45.905 Run time: 1 seconds 00:09:45.905 Verify: No 00:09:45.905 00:09:45.905 Running for 1 seconds... 00:09:45.905 00:09:45.905 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:45.905 ------------------------------------------------------------------------------------ 00:09:45.905 0,0 97184/s 385 MiB/s 0 0 00:09:45.905 ==================================================================================== 00:09:45.905 Total 97184/s 379 MiB/s 0 0' 00:09:45.905 14:23:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:45.905 14:23:52 -- accel/accel.sh@20 -- # IFS=: 00:09:45.905 14:23:52 -- accel/accel.sh@20 -- # read -r var val 00:09:45.905 14:23:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:45.905 14:23:52 -- accel/accel.sh@12 -- # build_accel_config 00:09:45.905 14:23:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:45.905 14:23:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:45.905 14:23:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:45.905 14:23:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:45.905 14:23:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:45.905 14:23:52 -- accel/accel.sh@41 -- # local IFS=, 00:09:45.905 14:23:52 -- accel/accel.sh@42 -- # jq -r . 00:09:45.905 [2024-12-06 14:23:52.845560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
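The dif_generate_copy Total line above is consistent with transfers per second times the 4096-byte transfer size. A quick shell check using the numbers from the table (illustrative arithmetic, not test output):

  # 97184 transfers/s * 4096 B per transfer, converted to MiB/s
  echo $(( 97184 * 4096 / 1048576 ))   # prints 379, matching the Total column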
00:09:45.905 [2024-12-06 14:23:52.845675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59507 ] 00:09:46.164 [2024-12-06 14:23:52.988543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.424 [2024-12-06 14:23:53.149999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val= 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val= 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val=0x1 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val= 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val= 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val= 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val=software 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@23 -- # accel_module=software 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val=32 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val=32 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 
-- # val=1 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val=No 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val= 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:46.424 14:23:53 -- accel/accel.sh@21 -- # val= 00:09:46.424 14:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # IFS=: 00:09:46.424 14:23:53 -- accel/accel.sh@20 -- # read -r var val 00:09:47.802 14:23:54 -- accel/accel.sh@21 -- # val= 00:09:47.802 14:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # IFS=: 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # read -r var val 00:09:47.802 14:23:54 -- accel/accel.sh@21 -- # val= 00:09:47.802 14:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # IFS=: 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # read -r var val 00:09:47.802 14:23:54 -- accel/accel.sh@21 -- # val= 00:09:47.802 14:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # IFS=: 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # read -r var val 00:09:47.802 14:23:54 -- accel/accel.sh@21 -- # val= 00:09:47.802 14:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # IFS=: 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # read -r var val 00:09:47.802 14:23:54 -- accel/accel.sh@21 -- # val= 00:09:47.802 14:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # IFS=: 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # read -r var val 00:09:47.802 14:23:54 -- accel/accel.sh@21 -- # val= 00:09:47.802 14:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # IFS=: 00:09:47.802 14:23:54 -- accel/accel.sh@20 -- # read -r var val 00:09:47.802 14:23:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:47.802 14:23:54 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:09:47.802 14:23:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:47.802 ************************************ 00:09:47.802 END TEST accel_dif_generate_copy 00:09:47.802 ************************************ 00:09:47.802 00:09:47.802 real 0m3.408s 00:09:47.802 user 0m2.874s 00:09:47.802 sys 0m0.322s 00:09:47.802 14:23:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:47.802 14:23:54 -- common/autotest_common.sh@10 -- # set +x 00:09:47.802 14:23:54 -- accel/accel.sh@107 -- # [[ y == y ]] 00:09:47.802 14:23:54 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:47.802 14:23:54 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:47.802 14:23:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:47.802 14:23:54 -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.802 ************************************ 00:09:47.802 START TEST accel_comp 00:09:47.802 ************************************ 00:09:47.802 14:23:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:47.802 14:23:54 -- accel/accel.sh@16 -- # local accel_opc 00:09:47.802 14:23:54 -- accel/accel.sh@17 -- # local accel_module 00:09:47.802 14:23:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:47.802 14:23:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:47.802 14:23:54 -- accel/accel.sh@12 -- # build_accel_config 00:09:47.802 14:23:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:47.802 14:23:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:47.802 14:23:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:47.802 14:23:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:47.802 14:23:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:47.802 14:23:54 -- accel/accel.sh@41 -- # local IFS=, 00:09:47.802 14:23:54 -- accel/accel.sh@42 -- # jq -r . 00:09:47.802 [2024-12-06 14:23:54.611290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:47.802 [2024-12-06 14:23:54.611397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59545 ] 00:09:47.802 [2024-12-06 14:23:54.755816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.060 [2024-12-06 14:23:54.937608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.436 14:23:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:49.436 00:09:49.436 SPDK Configuration: 00:09:49.436 Core mask: 0x1 00:09:49.436 00:09:49.436 Accel Perf Configuration: 00:09:49.436 Workload Type: compress 00:09:49.436 Transfer size: 4096 bytes 00:09:49.436 Vector count 1 00:09:49.436 Module: software 00:09:49.436 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:49.436 Queue depth: 32 00:09:49.436 Allocate depth: 32 00:09:49.436 # threads/core: 1 00:09:49.436 Run time: 1 seconds 00:09:49.436 Verify: No 00:09:49.436 00:09:49.436 Running for 1 seconds... 
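A minimal sketch of the compress case starting here (not captured output; its results table follows below): compared with the DIF workloads, the only new option is -l, which per the "File Name" line in the configuration dump points accel_perf at the input file to compress. Assuming the same built tree:

  cd /home/vagrant/spdk_repo/spdk
  # compress the bib test file for one second using the software module
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib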
00:09:49.436 00:09:49.436 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:49.436 ------------------------------------------------------------------------------------ 00:09:49.436 0,0 48672/s 202 MiB/s 0 0 00:09:49.436 ==================================================================================== 00:09:49.436 Total 48672/s 190 MiB/s 0 0' 00:09:49.436 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.436 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.436 14:23:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:49.436 14:23:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:49.436 14:23:56 -- accel/accel.sh@12 -- # build_accel_config 00:09:49.436 14:23:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:49.436 14:23:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.436 14:23:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.436 14:23:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:49.436 14:23:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:49.436 14:23:56 -- accel/accel.sh@41 -- # local IFS=, 00:09:49.436 14:23:56 -- accel/accel.sh@42 -- # jq -r . 00:09:49.436 [2024-12-06 14:23:56.332957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:49.436 [2024-12-06 14:23:56.333584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59561 ] 00:09:49.695 [2024-12-06 14:23:56.470339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.695 [2024-12-06 14:23:56.624350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=0x1 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=compress 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@24 -- # accel_opc=compress 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 
00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=software 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@23 -- # accel_module=software 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=32 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=32 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=1 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val=No 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:49.954 14:23:56 -- accel/accel.sh@21 -- # val= 00:09:49.954 14:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # IFS=: 00:09:49.954 14:23:56 -- accel/accel.sh@20 -- # read -r var val 00:09:51.331 14:23:57 -- accel/accel.sh@21 -- # val= 00:09:51.331 14:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # IFS=: 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # read -r var val 00:09:51.331 14:23:57 -- accel/accel.sh@21 -- # val= 00:09:51.331 14:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # IFS=: 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # read -r var val 00:09:51.331 14:23:57 -- accel/accel.sh@21 -- # val= 00:09:51.331 14:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # IFS=: 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # read -r var val 00:09:51.331 14:23:57 -- accel/accel.sh@21 -- # val= 
00:09:51.331 14:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # IFS=: 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # read -r var val 00:09:51.331 14:23:57 -- accel/accel.sh@21 -- # val= 00:09:51.331 14:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # IFS=: 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # read -r var val 00:09:51.331 14:23:57 -- accel/accel.sh@21 -- # val= 00:09:51.331 14:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # IFS=: 00:09:51.331 14:23:57 -- accel/accel.sh@20 -- # read -r var val 00:09:51.331 14:23:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:51.331 14:23:57 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:09:51.331 14:23:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:51.331 00:09:51.331 real 0m3.399s 00:09:51.331 user 0m2.865s 00:09:51.331 sys 0m0.321s 00:09:51.331 14:23:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.331 ************************************ 00:09:51.331 END TEST accel_comp 00:09:51.331 ************************************ 00:09:51.331 14:23:57 -- common/autotest_common.sh@10 -- # set +x 00:09:51.331 14:23:58 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.331 14:23:58 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:51.331 14:23:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.331 14:23:58 -- common/autotest_common.sh@10 -- # set +x 00:09:51.331 ************************************ 00:09:51.331 START TEST accel_decomp 00:09:51.331 ************************************ 00:09:51.331 14:23:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.331 14:23:58 -- accel/accel.sh@16 -- # local accel_opc 00:09:51.331 14:23:58 -- accel/accel.sh@17 -- # local accel_module 00:09:51.331 14:23:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.331 14:23:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.331 14:23:58 -- accel/accel.sh@12 -- # build_accel_config 00:09:51.331 14:23:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:51.331 14:23:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.331 14:23:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.331 14:23:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:51.331 14:23:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:51.331 14:23:58 -- accel/accel.sh@41 -- # local IFS=, 00:09:51.331 14:23:58 -- accel/accel.sh@42 -- # jq -r . 00:09:51.332 [2024-12-06 14:23:58.069910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:51.332 [2024-12-06 14:23:58.070017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:09:51.332 [2024-12-06 14:23:58.208312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.590 [2024-12-06 14:23:58.338785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.000 14:23:59 -- accel/accel.sh@18 -- # out='Preparing input file... 
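A matching sketch for the decompress case just starting (again not captured output): the added -y flag requests data verification, which is why the configuration below reports "Verify: Yes" where the DIF and compress runs reported "Verify: No". Assuming the same tree:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y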
00:09:53.000 00:09:53.000 SPDK Configuration: 00:09:53.000 Core mask: 0x1 00:09:53.000 00:09:53.000 Accel Perf Configuration: 00:09:53.000 Workload Type: decompress 00:09:53.000 Transfer size: 4096 bytes 00:09:53.000 Vector count 1 00:09:53.000 Module: software 00:09:53.000 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:53.000 Queue depth: 32 00:09:53.000 Allocate depth: 32 00:09:53.000 # threads/core: 1 00:09:53.000 Run time: 1 seconds 00:09:53.000 Verify: Yes 00:09:53.000 00:09:53.000 Running for 1 seconds... 00:09:53.000 00:09:53.000 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:53.000 ------------------------------------------------------------------------------------ 00:09:53.000 0,0 59328/s 109 MiB/s 0 0 00:09:53.000 ==================================================================================== 00:09:53.000 Total 59328/s 231 MiB/s 0 0' 00:09:53.000 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.000 14:23:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:53.000 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.000 14:23:59 -- accel/accel.sh@12 -- # build_accel_config 00:09:53.000 14:23:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:53.000 14:23:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:53.000 14:23:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.000 14:23:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.000 14:23:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:53.000 14:23:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:53.000 14:23:59 -- accel/accel.sh@41 -- # local IFS=, 00:09:53.000 14:23:59 -- accel/accel.sh@42 -- # jq -r . 00:09:53.000 [2024-12-06 14:23:59.623572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:53.001 [2024-12-06 14:23:59.623691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:09:53.001 [2024-12-06 14:23:59.762560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.001 [2024-12-06 14:23:59.892190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val=0x1 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val=decompress 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.001 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.001 14:23:59 -- accel/accel.sh@21 -- # val=software 00:09:53.001 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.001 14:23:59 -- accel/accel.sh@23 -- # accel_module=software 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- accel/accel.sh@21 -- # val=32 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- 
accel/accel.sh@21 -- # val=32 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- accel/accel.sh@21 -- # val=1 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- accel/accel.sh@21 -- # val=Yes 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:53.260 14:23:59 -- accel/accel.sh@21 -- # val= 00:09:53.260 14:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # IFS=: 00:09:53.260 14:23:59 -- accel/accel.sh@20 -- # read -r var val 00:09:54.195 14:24:01 -- accel/accel.sh@21 -- # val= 00:09:54.195 14:24:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.195 14:24:01 -- accel/accel.sh@20 -- # IFS=: 00:09:54.195 14:24:01 -- accel/accel.sh@20 -- # read -r var val 00:09:54.195 14:24:01 -- accel/accel.sh@21 -- # val= 00:09:54.195 14:24:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.195 14:24:01 -- accel/accel.sh@20 -- # IFS=: 00:09:54.195 14:24:01 -- accel/accel.sh@20 -- # read -r var val 00:09:54.195 14:24:01 -- accel/accel.sh@21 -- # val= 00:09:54.195 14:24:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.195 14:24:01 -- accel/accel.sh@20 -- # IFS=: 00:09:54.454 14:24:01 -- accel/accel.sh@20 -- # read -r var val 00:09:54.454 14:24:01 -- accel/accel.sh@21 -- # val= 00:09:54.454 14:24:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.454 14:24:01 -- accel/accel.sh@20 -- # IFS=: 00:09:54.454 14:24:01 -- accel/accel.sh@20 -- # read -r var val 00:09:54.454 14:24:01 -- accel/accel.sh@21 -- # val= 00:09:54.454 14:24:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.454 14:24:01 -- accel/accel.sh@20 -- # IFS=: 00:09:54.454 14:24:01 -- accel/accel.sh@20 -- # read -r var val 00:09:54.454 14:24:01 -- accel/accel.sh@21 -- # val= 00:09:54.454 14:24:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.454 14:24:01 -- accel/accel.sh@20 -- # IFS=: 00:09:54.454 14:24:01 -- accel/accel.sh@20 -- # read -r var val 00:09:54.454 14:24:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:54.454 ************************************ 00:09:54.454 END TEST accel_decomp 00:09:54.454 ************************************ 00:09:54.454 14:24:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:54.454 14:24:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:54.454 00:09:54.454 real 0m3.123s 00:09:54.454 user 0m2.658s 00:09:54.454 sys 0m0.251s 00:09:54.454 14:24:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.454 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.454 14:24:01 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:09:54.454 14:24:01 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:09:54.454 14:24:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.454 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.454 ************************************ 00:09:54.454 START TEST accel_decmop_full 00:09:54.454 ************************************ 00:09:54.454 14:24:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:54.454 14:24:01 -- accel/accel.sh@16 -- # local accel_opc 00:09:54.454 14:24:01 -- accel/accel.sh@17 -- # local accel_module 00:09:54.454 14:24:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:54.455 14:24:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:54.455 14:24:01 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.455 14:24:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.455 14:24:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.455 14:24:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.455 14:24:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.455 14:24:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.455 14:24:01 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.455 14:24:01 -- accel/accel.sh@42 -- # jq -r . 00:09:54.455 [2024-12-06 14:24:01.254692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:54.455 [2024-12-06 14:24:01.254836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59655 ] 00:09:54.455 [2024-12-06 14:24:01.391932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.712 [2024-12-06 14:24:01.494216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.086 14:24:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:56.086 00:09:56.086 SPDK Configuration: 00:09:56.086 Core mask: 0x1 00:09:56.086 00:09:56.087 Accel Perf Configuration: 00:09:56.087 Workload Type: decompress 00:09:56.087 Transfer size: 111250 bytes 00:09:56.087 Vector count 1 00:09:56.087 Module: software 00:09:56.087 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:56.087 Queue depth: 32 00:09:56.087 Allocate depth: 32 00:09:56.087 # threads/core: 1 00:09:56.087 Run time: 1 seconds 00:09:56.087 Verify: Yes 00:09:56.087 00:09:56.087 Running for 1 seconds... 
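The "full" decompress variant above adds only -o 0; judging from the configuration dump this makes accel_perf take the transfer size from the input data itself (111250 bytes here) instead of the 4 KiB used elsewhere, so the transfer count per second drops sharply while bandwidth stays in the same range. A hedged sketch of the invocation (the results table follows below):

  cd /home/vagrant/spdk_repo/spdk
  # -o 0: let the transfer size come from the input data rather than forcing 4096 bytes (assumption)
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0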
00:09:56.087 00:09:56.087 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:56.087 ------------------------------------------------------------------------------------ 00:09:56.087 0,0 4320/s 178 MiB/s 0 0 00:09:56.087 ==================================================================================== 00:09:56.087 Total 4320/s 458 MiB/s 0 0' 00:09:56.087 14:24:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:56.087 14:24:02 -- accel/accel.sh@20 -- # IFS=: 00:09:56.087 14:24:02 -- accel/accel.sh@20 -- # read -r var val 00:09:56.087 14:24:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:56.087 14:24:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.087 14:24:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:56.087 14:24:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.087 14:24:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.087 14:24:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:56.087 14:24:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:56.087 14:24:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:56.087 14:24:02 -- accel/accel.sh@42 -- # jq -r . 00:09:56.087 [2024-12-06 14:24:02.793239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:56.087 [2024-12-06 14:24:02.793356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59675 ] 00:09:56.087 [2024-12-06 14:24:02.932102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.346 [2024-12-06 14:24:03.057446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=0x1 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=decompress 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:56.346 14:24:03 -- accel/accel.sh@20 
-- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=software 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@23 -- # accel_module=software 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=32 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=32 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=1 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val=Yes 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:56.346 14:24:03 -- accel/accel.sh@21 -- # val= 00:09:56.346 14:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # IFS=: 00:09:56.346 14:24:03 -- accel/accel.sh@20 -- # read -r var val 00:09:57.762 14:24:04 -- accel/accel.sh@21 -- # val= 00:09:57.762 14:24:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # IFS=: 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # read -r var val 00:09:57.762 14:24:04 -- accel/accel.sh@21 -- # val= 00:09:57.762 14:24:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # IFS=: 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # read -r var val 00:09:57.762 14:24:04 -- accel/accel.sh@21 -- # val= 00:09:57.762 14:24:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # IFS=: 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # read -r var val 00:09:57.762 14:24:04 -- accel/accel.sh@21 -- # 
val= 00:09:57.762 14:24:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # IFS=: 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # read -r var val 00:09:57.762 14:24:04 -- accel/accel.sh@21 -- # val= 00:09:57.762 14:24:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # IFS=: 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # read -r var val 00:09:57.762 14:24:04 -- accel/accel.sh@21 -- # val= 00:09:57.762 14:24:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # IFS=: 00:09:57.762 14:24:04 -- accel/accel.sh@20 -- # read -r var val 00:09:57.762 14:24:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:57.762 14:24:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:57.762 14:24:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:57.762 00:09:57.762 real 0m3.130s 00:09:57.762 user 0m2.678s 00:09:57.762 sys 0m0.242s 00:09:57.762 14:24:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:57.762 14:24:04 -- common/autotest_common.sh@10 -- # set +x 00:09:57.762 ************************************ 00:09:57.762 END TEST accel_decmop_full 00:09:57.762 ************************************ 00:09:57.762 14:24:04 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:57.762 14:24:04 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:09:57.762 14:24:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.762 14:24:04 -- common/autotest_common.sh@10 -- # set +x 00:09:57.762 ************************************ 00:09:57.762 START TEST accel_decomp_mcore 00:09:57.762 ************************************ 00:09:57.762 14:24:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:57.762 14:24:04 -- accel/accel.sh@16 -- # local accel_opc 00:09:57.762 14:24:04 -- accel/accel.sh@17 -- # local accel_module 00:09:57.762 14:24:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:57.762 14:24:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:57.762 14:24:04 -- accel/accel.sh@12 -- # build_accel_config 00:09:57.762 14:24:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.762 14:24:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.762 14:24:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.762 14:24:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.762 14:24:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.762 14:24:04 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.762 14:24:04 -- accel/accel.sh@42 -- # jq -r . 00:09:57.762 [2024-12-06 14:24:04.437527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
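A sketch of the multi-core case now starting (not captured output): -m 0xf requests core mask 0xf, i.e. cores 0-3, which matches the "-c 0xf" in the EAL parameters below and the four "Reactor started" notices. Assuming the same tree:

  cd /home/vagrant/spdk_repo/spdk
  # 0xf = binary 1111 -> run the decompress workload on cores 0, 1, 2 and 3 in parallel
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf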
00:09:57.762 [2024-12-06 14:24:04.437636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59709 ] 00:09:57.762 [2024-12-06 14:24:04.577348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.762 [2024-12-06 14:24:04.719549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.762 [2024-12-06 14:24:04.719645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.762 [2024-12-06 14:24:04.719827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.762 [2024-12-06 14:24:04.719830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.660 14:24:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:59.660 00:09:59.660 SPDK Configuration: 00:09:59.660 Core mask: 0xf 00:09:59.660 00:09:59.660 Accel Perf Configuration: 00:09:59.660 Workload Type: decompress 00:09:59.660 Transfer size: 4096 bytes 00:09:59.660 Vector count 1 00:09:59.660 Module: software 00:09:59.660 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:59.660 Queue depth: 32 00:09:59.660 Allocate depth: 32 00:09:59.660 # threads/core: 1 00:09:59.660 Run time: 1 seconds 00:09:59.660 Verify: Yes 00:09:59.660 00:09:59.660 Running for 1 seconds... 00:09:59.660 00:09:59.660 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:59.660 ------------------------------------------------------------------------------------ 00:09:59.660 0,0 43936/s 80 MiB/s 0 0 00:09:59.660 3,0 44096/s 81 MiB/s 0 0 00:09:59.660 2,0 44704/s 82 MiB/s 0 0 00:09:59.660 1,0 44672/s 82 MiB/s 0 0 00:09:59.660 ==================================================================================== 00:09:59.660 Total 177408/s 693 MiB/s 0 0' 00:09:59.660 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.660 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.660 14:24:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:59.660 14:24:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:59.660 14:24:06 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.660 14:24:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.660 14:24:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.660 14:24:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.660 14:24:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.660 14:24:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.660 14:24:06 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.660 14:24:06 -- accel/accel.sh@42 -- # jq -r . 00:09:59.660 [2024-12-06 14:24:06.295017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
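Reading the table above: each row is one (core,thread) pair from the 0xf mask, and the Total transfer rate is the sum of the four per-core rates (43936 + 44096 + 44704 + 44672 = 177408/s). The Total bandwidth again works out to transfers times the 4096-byte size; the per-core MiB/s column in this build does not follow the same arithmetic, so the Total line is the easier one to sanity-check:

  echo $(( (43936 + 44096 + 44704 + 44672) * 4096 / 1048576 ))   # prints 693 (MiB/s), matching the Total line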
00:09:59.660 [2024-12-06 14:24:06.295130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:09:59.660 [2024-12-06 14:24:06.429588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.660 [2024-12-06 14:24:06.591494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.660 [2024-12-06 14:24:06.591663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.660 [2024-12-06 14:24:06.591776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.660 [2024-12-06 14:24:06.591789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=0xf 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=decompress 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=software 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@23 -- # accel_module=software 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 
00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=32 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=32 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=1 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val=Yes 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:09:59.919 14:24:06 -- accel/accel.sh@21 -- # val= 00:09:59.919 14:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # IFS=: 00:09:59.919 14:24:06 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- 
accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@21 -- # val= 00:10:01.294 14:24:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # IFS=: 00:10:01.294 14:24:07 -- accel/accel.sh@20 -- # read -r var val 00:10:01.294 14:24:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:01.294 14:24:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:01.294 14:24:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:01.294 00:10:01.294 real 0m3.588s 00:10:01.294 user 0m10.677s 00:10:01.294 sys 0m0.371s 00:10:01.294 14:24:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:01.294 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:10:01.294 ************************************ 00:10:01.294 END TEST accel_decomp_mcore 00:10:01.294 ************************************ 00:10:01.294 14:24:08 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:01.294 14:24:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:01.294 14:24:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.294 14:24:08 -- common/autotest_common.sh@10 -- # set +x 00:10:01.294 ************************************ 00:10:01.294 START TEST accel_decomp_full_mcore 00:10:01.294 ************************************ 00:10:01.294 14:24:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:01.294 14:24:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:01.294 14:24:08 -- accel/accel.sh@17 -- # local accel_module 00:10:01.294 14:24:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:01.294 14:24:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:01.294 14:24:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.294 14:24:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.294 14:24:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.294 14:24:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.294 14:24:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.294 14:24:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.294 14:24:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.294 14:24:08 -- accel/accel.sh@42 -- # jq -r . 00:10:01.295 [2024-12-06 14:24:08.082036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:01.295 [2024-12-06 14:24:08.082652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59775 ] 00:10:01.295 [2024-12-06 14:24:08.227757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.584 [2024-12-06 14:24:08.354821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.584 [2024-12-06 14:24:08.354989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.584 [2024-12-06 14:24:08.355137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.584 [2024-12-06 14:24:08.355140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.962 14:24:09 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:02.962 00:10:02.962 SPDK Configuration: 00:10:02.962 Core mask: 0xf 00:10:02.962 00:10:02.962 Accel Perf Configuration: 00:10:02.962 Workload Type: decompress 00:10:02.962 Transfer size: 111250 bytes 00:10:02.962 Vector count 1 00:10:02.962 Module: software 00:10:02.962 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:02.962 Queue depth: 32 00:10:02.962 Allocate depth: 32 00:10:02.962 # threads/core: 1 00:10:02.962 Run time: 1 seconds 00:10:02.962 Verify: Yes 00:10:02.962 00:10:02.962 Running for 1 seconds... 00:10:02.962 00:10:02.962 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:02.962 ------------------------------------------------------------------------------------ 00:10:02.962 0,0 4352/s 179 MiB/s 0 0 00:10:02.962 3,0 4448/s 183 MiB/s 0 0 00:10:02.962 2,0 4448/s 183 MiB/s 0 0 00:10:02.962 1,0 4032/s 166 MiB/s 0 0 00:10:02.962 ==================================================================================== 00:10:02.962 Total 17280/s 1833 MiB/s 0 0' 00:10:02.962 14:24:09 -- accel/accel.sh@20 -- # IFS=: 00:10:02.962 14:24:09 -- accel/accel.sh@20 -- # read -r var val 00:10:02.962 14:24:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:02.962 14:24:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:02.962 14:24:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.962 14:24:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.962 14:24:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.962 14:24:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.962 14:24:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.962 14:24:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.962 14:24:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.962 14:24:09 -- accel/accel.sh@42 -- # jq -r . 00:10:02.962 [2024-12-06 14:24:09.683705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:02.962 [2024-12-06 14:24:09.684357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59797 ] 00:10:02.962 [2024-12-06 14:24:09.823235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.221 [2024-12-06 14:24:10.000252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.221 [2024-12-06 14:24:10.000460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.221 [2024-12-06 14:24:10.000526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.221 [2024-12-06 14:24:10.000526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=0xf 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=decompress 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=software 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@23 -- # accel_module=software 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 
00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=32 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=32 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=1 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val=Yes 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:03.221 14:24:10 -- accel/accel.sh@21 -- # val= 00:10:03.221 14:24:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # IFS=: 00:10:03.221 14:24:10 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- 
accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@21 -- # val= 00:10:04.598 14:24:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # IFS=: 00:10:04.598 14:24:11 -- accel/accel.sh@20 -- # read -r var val 00:10:04.598 14:24:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:04.598 14:24:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:04.598 14:24:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.598 00:10:04.598 real 0m3.437s 00:10:04.598 user 0m10.189s 00:10:04.598 sys 0m0.418s 00:10:04.598 14:24:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.598 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:10:04.598 ************************************ 00:10:04.598 END TEST accel_decomp_full_mcore 00:10:04.598 ************************************ 00:10:04.598 14:24:11 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:04.598 14:24:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:04.598 14:24:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.598 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:10:04.598 ************************************ 00:10:04.598 START TEST accel_decomp_mthread 00:10:04.598 ************************************ 00:10:04.598 14:24:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:04.598 14:24:11 -- accel/accel.sh@16 -- # local accel_opc 00:10:04.598 14:24:11 -- accel/accel.sh@17 -- # local accel_module 00:10:04.598 14:24:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:04.598 14:24:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:04.598 14:24:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.598 14:24:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.598 14:24:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.598 14:24:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.598 14:24:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.598 14:24:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.598 14:24:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.598 14:24:11 -- accel/accel.sh@42 -- # jq -r . 00:10:04.856 [2024-12-06 14:24:11.580433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.856 [2024-12-06 14:24:11.580753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ] 00:10:04.856 [2024-12-06 14:24:11.718691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.113 [2024-12-06 14:24:11.836455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.487 14:24:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:06.487 00:10:06.487 SPDK Configuration: 00:10:06.487 Core mask: 0x1 00:10:06.487 00:10:06.487 Accel Perf Configuration: 00:10:06.487 Workload Type: decompress 00:10:06.487 Transfer size: 4096 bytes 00:10:06.487 Vector count 1 00:10:06.487 Module: software 00:10:06.487 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:06.487 Queue depth: 32 00:10:06.487 Allocate depth: 32 00:10:06.487 # threads/core: 2 00:10:06.487 Run time: 1 seconds 00:10:06.487 Verify: Yes 00:10:06.487 00:10:06.487 Running for 1 seconds... 00:10:06.487 00:10:06.487 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:06.487 ------------------------------------------------------------------------------------ 00:10:06.487 0,1 33920/s 62 MiB/s 0 0 00:10:06.487 0,0 33824/s 62 MiB/s 0 0 00:10:06.487 ==================================================================================== 00:10:06.487 Total 67744/s 264 MiB/s 0 0' 00:10:06.488 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.488 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.488 14:24:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:06.488 14:24:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:06.488 14:24:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:06.488 14:24:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.488 14:24:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.488 14:24:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.488 14:24:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.488 14:24:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.488 14:24:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.488 14:24:13 -- accel/accel.sh@42 -- # jq -r . 00:10:06.488 [2024-12-06 14:24:13.131692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:06.488 [2024-12-06 14:24:13.131809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59860 ] 00:10:06.488 [2024-12-06 14:24:13.273155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.488 [2024-12-06 14:24:13.413080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val=0x1 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val=decompress 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val=software 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@23 -- # accel_module=software 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val=32 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- 
accel/accel.sh@21 -- # val=32 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val=2 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val=Yes 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:06.746 14:24:13 -- accel/accel.sh@21 -- # val= 00:10:06.746 14:24:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # IFS=: 00:10:06.746 14:24:13 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@21 -- # val= 00:10:08.119 14:24:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # IFS=: 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@21 -- # val= 00:10:08.119 14:24:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # IFS=: 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@21 -- # val= 00:10:08.119 14:24:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # IFS=: 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@21 -- # val= 00:10:08.119 14:24:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # IFS=: 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@21 -- # val= 00:10:08.119 14:24:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # IFS=: 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@21 -- # val= 00:10:08.119 14:24:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # IFS=: 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@21 -- # val= 00:10:08.119 14:24:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # IFS=: 00:10:08.119 14:24:14 -- accel/accel.sh@20 -- # read -r var val 00:10:08.119 14:24:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:08.119 ************************************ 00:10:08.119 END TEST accel_decomp_mthread 00:10:08.119 ************************************ 00:10:08.119 14:24:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:08.119 14:24:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.119 00:10:08.119 real 0m3.251s 00:10:08.119 user 0m2.784s 00:10:08.119 sys 0m0.253s 00:10:08.119 14:24:14 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:10:08.119 14:24:14 -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 14:24:14 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:08.119 14:24:14 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:08.119 14:24:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.119 14:24:14 -- common/autotest_common.sh@10 -- # set +x 00:10:08.119 ************************************ 00:10:08.119 START TEST accel_deomp_full_mthread 00:10:08.119 ************************************ 00:10:08.119 14:24:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:08.119 14:24:14 -- accel/accel.sh@16 -- # local accel_opc 00:10:08.119 14:24:14 -- accel/accel.sh@17 -- # local accel_module 00:10:08.119 14:24:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:08.119 14:24:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:08.119 14:24:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.119 14:24:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.119 14:24:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.119 14:24:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.120 14:24:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.120 14:24:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.120 14:24:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.120 14:24:14 -- accel/accel.sh@42 -- # jq -r . 00:10:08.120 [2024-12-06 14:24:14.884061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:08.120 [2024-12-06 14:24:14.884321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59894 ] 00:10:08.120 [2024-12-06 14:24:15.020357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.378 [2024-12-06 14:24:15.173518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.752 14:24:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:09.752 00:10:09.752 SPDK Configuration: 00:10:09.752 Core mask: 0x1 00:10:09.752 00:10:09.752 Accel Perf Configuration: 00:10:09.752 Workload Type: decompress 00:10:09.752 Transfer size: 111250 bytes 00:10:09.752 Vector count 1 00:10:09.752 Module: software 00:10:09.752 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:09.752 Queue depth: 32 00:10:09.752 Allocate depth: 32 00:10:09.752 # threads/core: 2 00:10:09.752 Run time: 1 seconds 00:10:09.752 Verify: Yes 00:10:09.752 00:10:09.752 Running for 1 seconds... 
00:10:09.752 00:10:09.752 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:09.752 ------------------------------------------------------------------------------------ 00:10:09.752 0,1 2272/s 93 MiB/s 0 0 00:10:09.752 0,0 2272/s 93 MiB/s 0 0 00:10:09.752 ==================================================================================== 00:10:09.752 Total 4544/s 482 MiB/s 0 0' 00:10:09.752 14:24:16 -- accel/accel.sh@20 -- # IFS=: 00:10:09.752 14:24:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:09.752 14:24:16 -- accel/accel.sh@20 -- # read -r var val 00:10:09.752 14:24:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:09.752 14:24:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.752 14:24:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.752 14:24:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.752 14:24:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.752 14:24:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.752 14:24:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.752 14:24:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.752 14:24:16 -- accel/accel.sh@42 -- # jq -r . 00:10:09.752 [2024-12-06 14:24:16.623605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:09.752 [2024-12-06 14:24:16.624398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ] 00:10:10.010 [2024-12-06 14:24:16.762192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.010 [2024-12-06 14:24:16.932598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val=0x1 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val=decompress 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val=software 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@23 -- # accel_module=software 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.269 14:24:17 -- accel/accel.sh@21 -- # val=32 00:10:10.269 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.269 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.270 14:24:17 -- accel/accel.sh@21 -- # val=32 00:10:10.270 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.270 14:24:17 -- accel/accel.sh@21 -- # val=2 00:10:10.270 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.270 14:24:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:10.270 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.270 14:24:17 -- accel/accel.sh@21 -- # val=Yes 00:10:10.270 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.270 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.270 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:10.270 14:24:17 -- accel/accel.sh@21 -- # val= 00:10:10.270 14:24:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # IFS=: 00:10:10.270 14:24:17 -- accel/accel.sh@20 -- # read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@21 -- # val= 00:10:11.670 14:24:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # IFS=: 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@21 -- # val= 00:10:11.670 14:24:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # IFS=: 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@21 -- # val= 00:10:11.670 14:24:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # IFS=: 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # 
read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@21 -- # val= 00:10:11.670 14:24:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # IFS=: 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@21 -- # val= 00:10:11.670 14:24:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # IFS=: 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@21 -- # val= 00:10:11.670 14:24:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # IFS=: 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@21 -- # val= 00:10:11.670 14:24:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # IFS=: 00:10:11.670 14:24:18 -- accel/accel.sh@20 -- # read -r var val 00:10:11.670 14:24:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:11.670 14:24:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:11.670 14:24:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:11.670 00:10:11.670 real 0m3.531s 00:10:11.670 user 0m2.997s 00:10:11.670 sys 0m0.325s 00:10:11.670 14:24:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.670 14:24:18 -- common/autotest_common.sh@10 -- # set +x 00:10:11.670 ************************************ 00:10:11.670 END TEST accel_deomp_full_mthread 00:10:11.670 ************************************ 00:10:11.670 14:24:18 -- accel/accel.sh@116 -- # [[ n == y ]] 00:10:11.670 14:24:18 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:11.670 14:24:18 -- accel/accel.sh@129 -- # build_accel_config 00:10:11.670 14:24:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.670 14:24:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:11.670 14:24:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.670 14:24:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.670 14:24:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.670 14:24:18 -- common/autotest_common.sh@10 -- # set +x 00:10:11.670 14:24:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.670 14:24:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.670 14:24:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.670 14:24:18 -- accel/accel.sh@42 -- # jq -r . 00:10:11.670 ************************************ 00:10:11.670 START TEST accel_dif_functional_tests 00:10:11.670 ************************************ 00:10:11.670 14:24:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:11.670 [2024-12-06 14:24:18.506595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:11.670 [2024-12-06 14:24:18.507053] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59955 ] 00:10:11.929 [2024-12-06 14:24:18.647731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.929 [2024-12-06 14:24:18.811282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.929 [2024-12-06 14:24:18.811495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.929 [2024-12-06 14:24:18.811511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.189 00:10:12.189 00:10:12.189 CUnit - A unit testing framework for C - Version 2.1-3 00:10:12.189 http://cunit.sourceforge.net/ 00:10:12.189 00:10:12.189 00:10:12.189 Suite: accel_dif 00:10:12.189 Test: verify: DIF generated, GUARD check ...passed 00:10:12.189 Test: verify: DIF generated, APPTAG check ...passed 00:10:12.189 Test: verify: DIF generated, REFTAG check ...passed 00:10:12.189 Test: verify: DIF not generated, GUARD check ...[2024-12-06 14:24:18.964279] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:12.189 [2024-12-06 14:24:18.964525] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:12.189 passed 00:10:12.189 Test: verify: DIF not generated, APPTAG check ...[2024-12-06 14:24:18.964798] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:12.189 [2024-12-06 14:24:18.964981] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:12.189 passed 00:10:12.189 Test: verify: DIF not generated, REFTAG check ...[2024-12-06 14:24:18.965284] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:12.189 [2024-12-06 14:24:18.965475] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:12.189 passed 00:10:12.189 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:12.189 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-06 14:24:18.965818] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:12.189 passed 00:10:12.189 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:12.189 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:12.189 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:12.189 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-06 14:24:18.966351] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:12.189 passed 00:10:12.189 Test: generate copy: DIF generated, GUARD check ...passed 00:10:12.189 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:12.189 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:12.189 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:12.189 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:12.189 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:12.189 Test: generate copy: iovecs-len validate ...[2024-12-06 14:24:18.967328] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:12.189 passed 00:10:12.189 Test: generate copy: buffer alignment validate ...passed 00:10:12.189 00:10:12.189 Run Summary: Type Total Ran Passed Failed Inactive 00:10:12.189 suites 1 1 n/a 0 0 00:10:12.189 tests 20 20 20 0 0 00:10:12.189 asserts 204 204 204 0 n/a 00:10:12.189 00:10:12.189 Elapsed time = 0.005 seconds 00:10:12.448 00:10:12.448 real 0m0.958s 00:10:12.448 user 0m1.450s 00:10:12.448 sys 0m0.248s 00:10:12.448 14:24:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:12.448 ************************************ 00:10:12.448 END TEST accel_dif_functional_tests 00:10:12.448 ************************************ 00:10:12.448 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:12.708 00:10:12.709 real 1m13.279s 00:10:12.709 user 1m16.963s 00:10:12.709 sys 0m8.339s 00:10:12.709 14:24:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:12.709 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:12.709 ************************************ 00:10:12.709 END TEST accel 00:10:12.709 ************************************ 00:10:12.709 14:24:19 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:12.709 14:24:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:12.709 14:24:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.709 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:12.709 ************************************ 00:10:12.709 START TEST accel_rpc 00:10:12.709 ************************************ 00:10:12.709 14:24:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:12.709 * Looking for test storage... 00:10:12.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:12.709 14:24:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:12.709 14:24:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:12.709 14:24:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:12.709 14:24:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:12.709 14:24:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:12.709 14:24:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:12.709 14:24:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:12.709 14:24:19 -- scripts/common.sh@335 -- # IFS=.-: 00:10:12.709 14:24:19 -- scripts/common.sh@335 -- # read -ra ver1 00:10:12.709 14:24:19 -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.709 14:24:19 -- scripts/common.sh@336 -- # read -ra ver2 00:10:12.709 14:24:19 -- scripts/common.sh@337 -- # local 'op=<' 00:10:12.709 14:24:19 -- scripts/common.sh@339 -- # ver1_l=2 00:10:12.709 14:24:19 -- scripts/common.sh@340 -- # ver2_l=1 00:10:12.709 14:24:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:12.709 14:24:19 -- scripts/common.sh@343 -- # case "$op" in 00:10:12.709 14:24:19 -- scripts/common.sh@344 -- # : 1 00:10:12.709 14:24:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:12.709 14:24:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.709 14:24:19 -- scripts/common.sh@364 -- # decimal 1 00:10:12.709 14:24:19 -- scripts/common.sh@352 -- # local d=1 00:10:12.709 14:24:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.709 14:24:19 -- scripts/common.sh@354 -- # echo 1 00:10:12.709 14:24:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:12.709 14:24:19 -- scripts/common.sh@365 -- # decimal 2 00:10:12.709 14:24:19 -- scripts/common.sh@352 -- # local d=2 00:10:12.709 14:24:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.709 14:24:19 -- scripts/common.sh@354 -- # echo 2 00:10:12.709 14:24:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:12.709 14:24:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:12.709 14:24:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:12.709 14:24:19 -- scripts/common.sh@367 -- # return 0 00:10:12.709 14:24:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.709 14:24:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:12.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.709 --rc genhtml_branch_coverage=1 00:10:12.709 --rc genhtml_function_coverage=1 00:10:12.709 --rc genhtml_legend=1 00:10:12.709 --rc geninfo_all_blocks=1 00:10:12.709 --rc geninfo_unexecuted_blocks=1 00:10:12.709 00:10:12.709 ' 00:10:12.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.709 14:24:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:12.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.709 --rc genhtml_branch_coverage=1 00:10:12.709 --rc genhtml_function_coverage=1 00:10:12.709 --rc genhtml_legend=1 00:10:12.709 --rc geninfo_all_blocks=1 00:10:12.709 --rc geninfo_unexecuted_blocks=1 00:10:12.709 00:10:12.709 ' 00:10:12.709 14:24:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:12.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.709 --rc genhtml_branch_coverage=1 00:10:12.709 --rc genhtml_function_coverage=1 00:10:12.709 --rc genhtml_legend=1 00:10:12.709 --rc geninfo_all_blocks=1 00:10:12.709 --rc geninfo_unexecuted_blocks=1 00:10:12.709 00:10:12.709 ' 00:10:12.709 14:24:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:12.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.709 --rc genhtml_branch_coverage=1 00:10:12.709 --rc genhtml_function_coverage=1 00:10:12.709 --rc genhtml_legend=1 00:10:12.709 --rc geninfo_all_blocks=1 00:10:12.709 --rc geninfo_unexecuted_blocks=1 00:10:12.709 00:10:12.709 ' 00:10:12.709 14:24:19 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:12.709 14:24:19 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=60032 00:10:12.709 14:24:19 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:12.709 14:24:19 -- accel/accel_rpc.sh@15 -- # waitforlisten 60032 00:10:12.709 14:24:19 -- common/autotest_common.sh@829 -- # '[' -z 60032 ']' 00:10:12.709 14:24:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.709 14:24:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.709 14:24:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:12.709 14:24:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.709 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:12.968 [2024-12-06 14:24:19.731525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:12.968 [2024-12-06 14:24:19.731840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:10:12.968 [2024-12-06 14:24:19.871156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.226 [2024-12-06 14:24:19.996366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:13.226 [2024-12-06 14:24:19.996804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.160 14:24:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.160 14:24:20 -- common/autotest_common.sh@862 -- # return 0 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:14.160 14:24:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:14.160 14:24:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.160 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:14.160 ************************************ 00:10:14.160 START TEST accel_assign_opcode 00:10:14.160 ************************************ 00:10:14.160 14:24:20 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:14.160 14:24:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.160 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:14.160 [2024-12-06 14:24:20.813641] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:14.160 14:24:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:14.160 14:24:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.160 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:14.160 [2024-12-06 14:24:20.821614] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:14.160 14:24:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.160 14:24:20 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:14.160 14:24:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.160 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:14.418 14:24:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.418 14:24:21 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:14.418 14:24:21 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:14.418 14:24:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.418 14:24:21 -- common/autotest_common.sh@10 -- # set +x 00:10:14.418 14:24:21 -- accel/accel_rpc.sh@42 -- # grep software 00:10:14.418 14:24:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.418 software 00:10:14.418 00:10:14.418 
real 0m0.434s 00:10:14.418 user 0m0.057s 00:10:14.418 sys 0m0.010s 00:10:14.418 14:24:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.418 14:24:21 -- common/autotest_common.sh@10 -- # set +x 00:10:14.418 ************************************ 00:10:14.418 END TEST accel_assign_opcode 00:10:14.418 ************************************ 00:10:14.418 14:24:21 -- accel/accel_rpc.sh@55 -- # killprocess 60032 00:10:14.418 14:24:21 -- common/autotest_common.sh@936 -- # '[' -z 60032 ']' 00:10:14.418 14:24:21 -- common/autotest_common.sh@940 -- # kill -0 60032 00:10:14.418 14:24:21 -- common/autotest_common.sh@941 -- # uname 00:10:14.418 14:24:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:14.418 14:24:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60032 00:10:14.418 killing process with pid 60032 00:10:14.418 14:24:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:14.418 14:24:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:14.419 14:24:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60032' 00:10:14.419 14:24:21 -- common/autotest_common.sh@955 -- # kill 60032 00:10:14.419 14:24:21 -- common/autotest_common.sh@960 -- # wait 60032 00:10:14.985 ************************************ 00:10:14.985 END TEST accel_rpc 00:10:14.985 ************************************ 00:10:14.985 00:10:14.985 real 0m2.449s 00:10:14.985 user 0m2.478s 00:10:14.985 sys 0m0.632s 00:10:14.985 14:24:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.985 14:24:21 -- common/autotest_common.sh@10 -- # set +x 00:10:15.244 14:24:21 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:15.244 14:24:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:15.244 14:24:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:15.244 14:24:21 -- common/autotest_common.sh@10 -- # set +x 00:10:15.244 ************************************ 00:10:15.244 START TEST app_cmdline 00:10:15.245 ************************************ 00:10:15.245 14:24:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:15.245 * Looking for test storage... 
00:10:15.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:15.245 14:24:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:15.245 14:24:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:15.245 14:24:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:15.245 14:24:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:15.245 14:24:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:15.245 14:24:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:15.245 14:24:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:15.245 14:24:22 -- scripts/common.sh@335 -- # IFS=.-: 00:10:15.245 14:24:22 -- scripts/common.sh@335 -- # read -ra ver1 00:10:15.245 14:24:22 -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.245 14:24:22 -- scripts/common.sh@336 -- # read -ra ver2 00:10:15.245 14:24:22 -- scripts/common.sh@337 -- # local 'op=<' 00:10:15.245 14:24:22 -- scripts/common.sh@339 -- # ver1_l=2 00:10:15.245 14:24:22 -- scripts/common.sh@340 -- # ver2_l=1 00:10:15.245 14:24:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:15.245 14:24:22 -- scripts/common.sh@343 -- # case "$op" in 00:10:15.245 14:24:22 -- scripts/common.sh@344 -- # : 1 00:10:15.245 14:24:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:15.245 14:24:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.245 14:24:22 -- scripts/common.sh@364 -- # decimal 1 00:10:15.245 14:24:22 -- scripts/common.sh@352 -- # local d=1 00:10:15.245 14:24:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.245 14:24:22 -- scripts/common.sh@354 -- # echo 1 00:10:15.245 14:24:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:15.245 14:24:22 -- scripts/common.sh@365 -- # decimal 2 00:10:15.245 14:24:22 -- scripts/common.sh@352 -- # local d=2 00:10:15.245 14:24:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.245 14:24:22 -- scripts/common.sh@354 -- # echo 2 00:10:15.245 14:24:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:15.245 14:24:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:15.245 14:24:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:15.245 14:24:22 -- scripts/common.sh@367 -- # return 0 00:10:15.245 14:24:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.245 14:24:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.245 --rc genhtml_branch_coverage=1 00:10:15.245 --rc genhtml_function_coverage=1 00:10:15.245 --rc genhtml_legend=1 00:10:15.245 --rc geninfo_all_blocks=1 00:10:15.245 --rc geninfo_unexecuted_blocks=1 00:10:15.245 00:10:15.245 ' 00:10:15.245 14:24:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.245 --rc genhtml_branch_coverage=1 00:10:15.245 --rc genhtml_function_coverage=1 00:10:15.245 --rc genhtml_legend=1 00:10:15.245 --rc geninfo_all_blocks=1 00:10:15.245 --rc geninfo_unexecuted_blocks=1 00:10:15.245 00:10:15.245 ' 00:10:15.245 14:24:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.245 --rc genhtml_branch_coverage=1 00:10:15.245 --rc genhtml_function_coverage=1 00:10:15.245 --rc genhtml_legend=1 00:10:15.245 --rc geninfo_all_blocks=1 00:10:15.245 --rc geninfo_unexecuted_blocks=1 00:10:15.245 00:10:15.245 ' 00:10:15.245 14:24:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.245 --rc genhtml_branch_coverage=1 00:10:15.245 --rc genhtml_function_coverage=1 00:10:15.245 --rc genhtml_legend=1 00:10:15.245 --rc geninfo_all_blocks=1 00:10:15.245 --rc geninfo_unexecuted_blocks=1 00:10:15.245 00:10:15.245 ' 00:10:15.245 14:24:22 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:15.245 14:24:22 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60159 00:10:15.245 14:24:22 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:15.245 14:24:22 -- app/cmdline.sh@18 -- # waitforlisten 60159 00:10:15.245 14:24:22 -- common/autotest_common.sh@829 -- # '[' -z 60159 ']' 00:10:15.245 14:24:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.245 14:24:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.245 14:24:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.245 14:24:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.245 14:24:22 -- common/autotest_common.sh@10 -- # set +x 00:10:15.566 [2024-12-06 14:24:22.242595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:15.566 [2024-12-06 14:24:22.243041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60159 ] 00:10:15.566 [2024-12-06 14:24:22.379117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.566 [2024-12-06 14:24:22.509281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.566 [2024-12-06 14:24:22.509771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.518 14:24:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.518 14:24:23 -- common/autotest_common.sh@862 -- # return 0 00:10:16.518 14:24:23 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:16.778 { 00:10:16.778 "fields": { 00:10:16.778 "commit": "c13c99a5e", 00:10:16.778 "major": 24, 00:10:16.778 "minor": 1, 00:10:16.778 "patch": 1, 00:10:16.778 "suffix": "-pre" 00:10:16.778 }, 00:10:16.778 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:10:16.778 } 00:10:16.778 14:24:23 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:16.778 14:24:23 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:16.778 14:24:23 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:16.778 14:24:23 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:16.778 14:24:23 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:16.778 14:24:23 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:16.778 14:24:23 -- app/cmdline.sh@26 -- # sort 00:10:16.778 14:24:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.778 14:24:23 -- common/autotest_common.sh@10 -- # set +x 00:10:16.778 14:24:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.778 14:24:23 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:16.778 14:24:23 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:16.778 14:24:23 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.778 14:24:23 -- common/autotest_common.sh@650 -- # local es=0 00:10:16.778 14:24:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.778 14:24:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.778 14:24:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.778 14:24:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.778 14:24:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.778 14:24:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.778 14:24:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.778 14:24:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.778 14:24:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:16.778 14:24:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:17.037 2024/12/06 14:24:23 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:10:17.037 request: 00:10:17.037 { 00:10:17.037 "method": "env_dpdk_get_mem_stats", 00:10:17.037 "params": {} 00:10:17.037 } 00:10:17.037 Got JSON-RPC error response 00:10:17.037 GoRPCClient: error on JSON-RPC call 00:10:17.037 14:24:23 -- common/autotest_common.sh@653 -- # es=1 00:10:17.037 14:24:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.037 14:24:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:17.037 14:24:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.037 14:24:23 -- app/cmdline.sh@1 -- # killprocess 60159 00:10:17.037 14:24:23 -- common/autotest_common.sh@936 -- # '[' -z 60159 ']' 00:10:17.037 14:24:23 -- common/autotest_common.sh@940 -- # kill -0 60159 00:10:17.037 14:24:23 -- common/autotest_common.sh@941 -- # uname 00:10:17.037 14:24:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:17.037 14:24:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60159 00:10:17.037 14:24:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:17.037 14:24:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:17.037 killing process with pid 60159 00:10:17.037 14:24:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60159' 00:10:17.037 14:24:23 -- common/autotest_common.sh@955 -- # kill 60159 00:10:17.037 14:24:23 -- common/autotest_common.sh@960 -- # wait 60159 00:10:17.605 00:10:17.605 real 0m2.459s 00:10:17.605 user 0m3.078s 00:10:17.605 sys 0m0.560s 00:10:17.605 14:24:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.605 14:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:17.605 ************************************ 00:10:17.605 END TEST app_cmdline 00:10:17.605 ************************************ 00:10:17.605 14:24:24 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:17.605 14:24:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:17.605 14:24:24 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.605 14:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:17.605 ************************************ 00:10:17.605 START TEST version 00:10:17.605 ************************************ 00:10:17.605 14:24:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:17.605 * Looking for test storage... 00:10:17.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:17.864 14:24:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:17.864 14:24:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:17.864 14:24:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:17.864 14:24:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:17.864 14:24:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:17.864 14:24:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:17.864 14:24:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:17.864 14:24:24 -- scripts/common.sh@335 -- # IFS=.-: 00:10:17.864 14:24:24 -- scripts/common.sh@335 -- # read -ra ver1 00:10:17.864 14:24:24 -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.864 14:24:24 -- scripts/common.sh@336 -- # read -ra ver2 00:10:17.864 14:24:24 -- scripts/common.sh@337 -- # local 'op=<' 00:10:17.864 14:24:24 -- scripts/common.sh@339 -- # ver1_l=2 00:10:17.864 14:24:24 -- scripts/common.sh@340 -- # ver2_l=1 00:10:17.864 14:24:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:17.864 14:24:24 -- scripts/common.sh@343 -- # case "$op" in 00:10:17.864 14:24:24 -- scripts/common.sh@344 -- # : 1 00:10:17.864 14:24:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:17.864 14:24:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.864 14:24:24 -- scripts/common.sh@364 -- # decimal 1 00:10:17.864 14:24:24 -- scripts/common.sh@352 -- # local d=1 00:10:17.864 14:24:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.864 14:24:24 -- scripts/common.sh@354 -- # echo 1 00:10:17.864 14:24:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:17.864 14:24:24 -- scripts/common.sh@365 -- # decimal 2 00:10:17.864 14:24:24 -- scripts/common.sh@352 -- # local d=2 00:10:17.864 14:24:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.864 14:24:24 -- scripts/common.sh@354 -- # echo 2 00:10:17.864 14:24:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:17.864 14:24:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:17.864 14:24:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:17.864 14:24:24 -- scripts/common.sh@367 -- # return 0 00:10:17.864 14:24:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.864 14:24:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.865 --rc genhtml_branch_coverage=1 00:10:17.865 --rc genhtml_function_coverage=1 00:10:17.865 --rc genhtml_legend=1 00:10:17.865 --rc geninfo_all_blocks=1 00:10:17.865 --rc geninfo_unexecuted_blocks=1 00:10:17.865 00:10:17.865 ' 00:10:17.865 14:24:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.865 --rc genhtml_branch_coverage=1 00:10:17.865 --rc genhtml_function_coverage=1 00:10:17.865 --rc genhtml_legend=1 00:10:17.865 --rc geninfo_all_blocks=1 00:10:17.865 --rc geninfo_unexecuted_blocks=1 00:10:17.865 00:10:17.865 ' 00:10:17.865 
14:24:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.865 --rc genhtml_branch_coverage=1 00:10:17.865 --rc genhtml_function_coverage=1 00:10:17.865 --rc genhtml_legend=1 00:10:17.865 --rc geninfo_all_blocks=1 00:10:17.865 --rc geninfo_unexecuted_blocks=1 00:10:17.865 00:10:17.865 ' 00:10:17.865 14:24:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.865 --rc genhtml_branch_coverage=1 00:10:17.865 --rc genhtml_function_coverage=1 00:10:17.865 --rc genhtml_legend=1 00:10:17.865 --rc geninfo_all_blocks=1 00:10:17.865 --rc geninfo_unexecuted_blocks=1 00:10:17.865 00:10:17.865 ' 00:10:17.865 14:24:24 -- app/version.sh@17 -- # get_header_version major 00:10:17.865 14:24:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.865 14:24:24 -- app/version.sh@14 -- # cut -f2 00:10:17.865 14:24:24 -- app/version.sh@14 -- # tr -d '"' 00:10:17.865 14:24:24 -- app/version.sh@17 -- # major=24 00:10:17.865 14:24:24 -- app/version.sh@18 -- # get_header_version minor 00:10:17.865 14:24:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.865 14:24:24 -- app/version.sh@14 -- # cut -f2 00:10:17.865 14:24:24 -- app/version.sh@14 -- # tr -d '"' 00:10:17.865 14:24:24 -- app/version.sh@18 -- # minor=1 00:10:17.865 14:24:24 -- app/version.sh@19 -- # get_header_version patch 00:10:17.865 14:24:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.865 14:24:24 -- app/version.sh@14 -- # cut -f2 00:10:17.865 14:24:24 -- app/version.sh@14 -- # tr -d '"' 00:10:17.865 14:24:24 -- app/version.sh@19 -- # patch=1 00:10:17.865 14:24:24 -- app/version.sh@20 -- # get_header_version suffix 00:10:17.865 14:24:24 -- app/version.sh@14 -- # cut -f2 00:10:17.865 14:24:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.865 14:24:24 -- app/version.sh@14 -- # tr -d '"' 00:10:17.865 14:24:24 -- app/version.sh@20 -- # suffix=-pre 00:10:17.865 14:24:24 -- app/version.sh@22 -- # version=24.1 00:10:17.865 14:24:24 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:17.865 14:24:24 -- app/version.sh@25 -- # version=24.1.1 00:10:17.865 14:24:24 -- app/version.sh@28 -- # version=24.1.1rc0 00:10:17.865 14:24:24 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:17.865 14:24:24 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:17.865 14:24:24 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:10:17.865 14:24:24 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:10:17.865 00:10:17.865 real 0m0.248s 00:10:17.865 user 0m0.150s 00:10:17.865 sys 0m0.134s 00:10:17.865 14:24:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.865 14:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:17.865 ************************************ 00:10:17.865 END TEST version 00:10:17.865 ************************************ 00:10:17.865 14:24:24 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:10:17.865 
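(Sketch, for readers skimming this trace: the "TEST version" block just above reduces to the pipeline below. The version.h path and the grep/cut/tr pattern are copied from the xtrace of app/version.sh; the closing comparison against the Python package version (24.1.1rc0 in this run) is not reproduced, so treat this as an illustration rather than the test script itself.)

#!/usr/bin/env bash
# Minimal re-creation of the get_header_version helper exercised above.
rootdir=/home/vagrant/spdk_repo/spdk   # repo root used by this job

get_header_version() {
    # $1 is one of MAJOR, MINOR, PATCH, SUFFIX; version.h uses tab-separated #define lines
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$rootdir/include/spdk/version.h" |
        cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)    # 24 in this run
minor=$(get_header_version MINOR)    # 1
patch=$(get_header_version PATCH)    # 1
suffix=$(get_header_version SUFFIX)  # -pre

version=$major.$minor
(( patch != 0 )) && version=$version.$patch
echo "$version$suffix"               # prints 24.1.1-pre for the commit under test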
14:24:24 -- spdk/autotest.sh@191 -- # uname -s 00:10:17.865 14:24:24 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:10:17.865 14:24:24 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:10:17.865 14:24:24 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:10:17.865 14:24:24 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:10:17.865 14:24:24 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:10:17.865 14:24:24 -- spdk/autotest.sh@255 -- # timing_exit lib 00:10:17.865 14:24:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:17.865 14:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:17.865 14:24:24 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:10:17.865 14:24:24 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:10:17.865 14:24:24 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:10:17.865 14:24:24 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:10:17.865 14:24:24 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:10:17.865 14:24:24 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:10:17.865 14:24:24 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:17.865 14:24:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:17.865 14:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.865 14:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.125 ************************************ 00:10:18.125 START TEST nvmf_tcp 00:10:18.125 ************************************ 00:10:18.125 14:24:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:18.125 * Looking for test storage... 00:10:18.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:18.125 14:24:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:18.125 14:24:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:18.125 14:24:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:18.125 14:24:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:18.125 14:24:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:18.125 14:24:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:18.125 14:24:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:18.125 14:24:25 -- scripts/common.sh@335 -- # IFS=.-: 00:10:18.125 14:24:25 -- scripts/common.sh@335 -- # read -ra ver1 00:10:18.125 14:24:25 -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.125 14:24:25 -- scripts/common.sh@336 -- # read -ra ver2 00:10:18.125 14:24:25 -- scripts/common.sh@337 -- # local 'op=<' 00:10:18.125 14:24:25 -- scripts/common.sh@339 -- # ver1_l=2 00:10:18.125 14:24:25 -- scripts/common.sh@340 -- # ver2_l=1 00:10:18.125 14:24:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:18.125 14:24:25 -- scripts/common.sh@343 -- # case "$op" in 00:10:18.125 14:24:25 -- scripts/common.sh@344 -- # : 1 00:10:18.125 14:24:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:18.125 14:24:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.125 14:24:25 -- scripts/common.sh@364 -- # decimal 1 00:10:18.125 14:24:25 -- scripts/common.sh@352 -- # local d=1 00:10:18.125 14:24:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.125 14:24:25 -- scripts/common.sh@354 -- # echo 1 00:10:18.125 14:24:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:18.125 14:24:25 -- scripts/common.sh@365 -- # decimal 2 00:10:18.125 14:24:25 -- scripts/common.sh@352 -- # local d=2 00:10:18.125 14:24:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.125 14:24:25 -- scripts/common.sh@354 -- # echo 2 00:10:18.125 14:24:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:18.125 14:24:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:18.125 14:24:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:18.125 14:24:25 -- scripts/common.sh@367 -- # return 0 00:10:18.125 14:24:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.125 14:24:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.125 --rc genhtml_branch_coverage=1 00:10:18.125 --rc genhtml_function_coverage=1 00:10:18.125 --rc genhtml_legend=1 00:10:18.125 --rc geninfo_all_blocks=1 00:10:18.125 --rc geninfo_unexecuted_blocks=1 00:10:18.125 00:10:18.125 ' 00:10:18.125 14:24:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.125 --rc genhtml_branch_coverage=1 00:10:18.125 --rc genhtml_function_coverage=1 00:10:18.125 --rc genhtml_legend=1 00:10:18.125 --rc geninfo_all_blocks=1 00:10:18.125 --rc geninfo_unexecuted_blocks=1 00:10:18.125 00:10:18.125 ' 00:10:18.125 14:24:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.125 --rc genhtml_branch_coverage=1 00:10:18.125 --rc genhtml_function_coverage=1 00:10:18.125 --rc genhtml_legend=1 00:10:18.125 --rc geninfo_all_blocks=1 00:10:18.125 --rc geninfo_unexecuted_blocks=1 00:10:18.125 00:10:18.125 ' 00:10:18.125 14:24:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.125 --rc genhtml_branch_coverage=1 00:10:18.125 --rc genhtml_function_coverage=1 00:10:18.125 --rc genhtml_legend=1 00:10:18.125 --rc geninfo_all_blocks=1 00:10:18.125 --rc geninfo_unexecuted_blocks=1 00:10:18.125 00:10:18.125 ' 00:10:18.125 14:24:25 -- nvmf/nvmf.sh@10 -- # uname -s 00:10:18.125 14:24:25 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:18.125 14:24:25 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.125 14:24:25 -- nvmf/common.sh@7 -- # uname -s 00:10:18.125 14:24:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.125 14:24:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.125 14:24:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.125 14:24:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.125 14:24:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.125 14:24:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.125 14:24:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.125 14:24:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.125 14:24:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.125 14:24:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.125 14:24:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:10:18.125 14:24:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:10:18.125 14:24:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.125 14:24:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.125 14:24:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.125 14:24:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.125 14:24:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.125 14:24:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.125 14:24:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.125 14:24:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.125 14:24:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.125 14:24:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.125 14:24:25 -- paths/export.sh@5 -- # export PATH 00:10:18.125 14:24:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.125 14:24:25 -- nvmf/common.sh@46 -- # : 0 00:10:18.125 14:24:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:18.125 14:24:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:18.125 14:24:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:18.125 14:24:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.125 14:24:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.125 14:24:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:18.125 14:24:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:18.125 14:24:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:18.126 14:24:25 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:18.126 14:24:25 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:18.126 14:24:25 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:18.126 14:24:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.126 14:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:18.126 14:24:25 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:10:18.126 14:24:25 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:18.126 14:24:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:18.126 14:24:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.126 14:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:18.126 ************************************ 00:10:18.126 START TEST nvmf_example 00:10:18.126 ************************************ 00:10:18.126 14:24:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:18.383 * Looking for test storage... 00:10:18.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.383 14:24:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:18.383 14:24:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:18.383 14:24:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:18.383 14:24:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:18.383 14:24:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:18.383 14:24:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:18.383 14:24:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:18.383 14:24:25 -- scripts/common.sh@335 -- # IFS=.-: 00:10:18.383 14:24:25 -- scripts/common.sh@335 -- # read -ra ver1 00:10:18.383 14:24:25 -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.383 14:24:25 -- scripts/common.sh@336 -- # read -ra ver2 00:10:18.383 14:24:25 -- scripts/common.sh@337 -- # local 'op=<' 00:10:18.383 14:24:25 -- scripts/common.sh@339 -- # ver1_l=2 00:10:18.383 14:24:25 -- scripts/common.sh@340 -- # ver2_l=1 00:10:18.383 14:24:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:18.383 14:24:25 -- scripts/common.sh@343 -- # case "$op" in 00:10:18.383 14:24:25 -- scripts/common.sh@344 -- # : 1 00:10:18.383 14:24:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:18.383 14:24:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.383 14:24:25 -- scripts/common.sh@364 -- # decimal 1 00:10:18.383 14:24:25 -- scripts/common.sh@352 -- # local d=1 00:10:18.383 14:24:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.383 14:24:25 -- scripts/common.sh@354 -- # echo 1 00:10:18.383 14:24:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:18.383 14:24:25 -- scripts/common.sh@365 -- # decimal 2 00:10:18.383 14:24:25 -- scripts/common.sh@352 -- # local d=2 00:10:18.383 14:24:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.383 14:24:25 -- scripts/common.sh@354 -- # echo 2 00:10:18.383 14:24:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:18.383 14:24:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:18.383 14:24:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:18.383 14:24:25 -- scripts/common.sh@367 -- # return 0 00:10:18.383 14:24:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.383 14:24:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:18.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.383 --rc genhtml_branch_coverage=1 00:10:18.383 --rc genhtml_function_coverage=1 00:10:18.383 --rc genhtml_legend=1 00:10:18.383 --rc geninfo_all_blocks=1 00:10:18.383 --rc geninfo_unexecuted_blocks=1 00:10:18.383 00:10:18.383 ' 00:10:18.383 14:24:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:18.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.383 --rc genhtml_branch_coverage=1 00:10:18.383 --rc genhtml_function_coverage=1 00:10:18.383 --rc genhtml_legend=1 00:10:18.383 --rc geninfo_all_blocks=1 00:10:18.383 --rc geninfo_unexecuted_blocks=1 00:10:18.383 00:10:18.383 ' 00:10:18.383 14:24:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:18.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.383 --rc genhtml_branch_coverage=1 00:10:18.383 --rc genhtml_function_coverage=1 00:10:18.383 --rc genhtml_legend=1 00:10:18.383 --rc geninfo_all_blocks=1 00:10:18.383 --rc geninfo_unexecuted_blocks=1 00:10:18.383 00:10:18.383 ' 00:10:18.383 14:24:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:18.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.383 --rc genhtml_branch_coverage=1 00:10:18.383 --rc genhtml_function_coverage=1 00:10:18.383 --rc genhtml_legend=1 00:10:18.383 --rc geninfo_all_blocks=1 00:10:18.383 --rc geninfo_unexecuted_blocks=1 00:10:18.383 00:10:18.383 ' 00:10:18.383 14:24:25 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.383 14:24:25 -- nvmf/common.sh@7 -- # uname -s 00:10:18.383 14:24:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.383 14:24:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.383 14:24:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.383 14:24:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.383 14:24:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.383 14:24:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.383 14:24:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.383 14:24:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.383 14:24:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.383 14:24:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.383 14:24:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
00:10:18.383 14:24:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:10:18.383 14:24:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.383 14:24:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.383 14:24:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.383 14:24:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.383 14:24:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.383 14:24:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.383 14:24:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.383 14:24:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.383 14:24:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.383 14:24:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.383 14:24:25 -- paths/export.sh@5 -- # export PATH 00:10:18.383 14:24:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.383 14:24:25 -- nvmf/common.sh@46 -- # : 0 00:10:18.383 14:24:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:18.383 14:24:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:18.383 14:24:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:18.383 14:24:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.384 14:24:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.384 14:24:25 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:18.384 14:24:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:18.384 14:24:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:18.384 14:24:25 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:18.384 14:24:25 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:18.384 14:24:25 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:18.384 14:24:25 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:18.384 14:24:25 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:18.384 14:24:25 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:18.384 14:24:25 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:18.384 14:24:25 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:18.384 14:24:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.384 14:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:18.384 14:24:25 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:18.384 14:24:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:18.384 14:24:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.384 14:24:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:18.384 14:24:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:18.384 14:24:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:18.384 14:24:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.384 14:24:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.384 14:24:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.384 14:24:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:18.384 14:24:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:18.384 14:24:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:18.384 14:24:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:18.384 14:24:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:18.384 14:24:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:18.384 14:24:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.384 14:24:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.384 14:24:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.384 14:24:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:18.384 14:24:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.384 14:24:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.384 14:24:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.384 14:24:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.384 14:24:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.384 14:24:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.384 14:24:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.384 14:24:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.384 14:24:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:18.384 Cannot find device "nvmf_init_br" 00:10:18.384 14:24:25 -- nvmf/common.sh@153 -- # true 00:10:18.384 14:24:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:18.384 Cannot find device "nvmf_tgt_br" 00:10:18.384 14:24:25 -- nvmf/common.sh@154 -- # true 00:10:18.384 14:24:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.384 Cannot find device "nvmf_tgt_br2" 
00:10:18.384 14:24:25 -- nvmf/common.sh@155 -- # true 00:10:18.384 14:24:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:18.384 Cannot find device "nvmf_init_br" 00:10:18.384 14:24:25 -- nvmf/common.sh@156 -- # true 00:10:18.384 14:24:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:18.640 Cannot find device "nvmf_tgt_br" 00:10:18.640 14:24:25 -- nvmf/common.sh@157 -- # true 00:10:18.640 14:24:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:18.640 Cannot find device "nvmf_tgt_br2" 00:10:18.640 14:24:25 -- nvmf/common.sh@158 -- # true 00:10:18.640 14:24:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:18.640 Cannot find device "nvmf_br" 00:10:18.640 14:24:25 -- nvmf/common.sh@159 -- # true 00:10:18.640 14:24:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:18.640 Cannot find device "nvmf_init_if" 00:10:18.640 14:24:25 -- nvmf/common.sh@160 -- # true 00:10:18.640 14:24:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.640 14:24:25 -- nvmf/common.sh@161 -- # true 00:10:18.640 14:24:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.640 14:24:25 -- nvmf/common.sh@162 -- # true 00:10:18.640 14:24:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.640 14:24:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.640 14:24:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.640 14:24:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.640 14:24:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.640 14:24:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.640 14:24:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.640 14:24:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.640 14:24:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.640 14:24:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:18.640 14:24:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:18.640 14:24:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:18.640 14:24:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:18.640 14:24:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.640 14:24:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.641 14:24:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.641 14:24:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:18.641 14:24:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:18.641 14:24:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.641 14:24:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.911 14:24:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.911 14:24:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.911 14:24:25 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.911 14:24:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:18.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:10:18.911 00:10:18.911 --- 10.0.0.2 ping statistics --- 00:10:18.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.911 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:10:18.911 14:24:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:18.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:10:18.911 00:10:18.911 --- 10.0.0.3 ping statistics --- 00:10:18.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.911 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:18.911 14:24:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:10:18.911 00:10:18.911 --- 10.0.0.1 ping statistics --- 00:10:18.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.911 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:18.911 14:24:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.911 14:24:25 -- nvmf/common.sh@421 -- # return 0 00:10:18.911 14:24:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:18.911 14:24:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.911 14:24:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:18.911 14:24:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:18.911 14:24:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.911 14:24:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:18.911 14:24:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:18.911 14:24:25 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:18.911 14:24:25 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:18.911 14:24:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.911 14:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:18.911 14:24:25 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:18.911 14:24:25 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:18.911 14:24:25 -- target/nvmf_example.sh@34 -- # nvmfpid=60532 00:10:18.911 14:24:25 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:18.911 14:24:25 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:18.911 14:24:25 -- target/nvmf_example.sh@36 -- # waitforlisten 60532 00:10:18.911 14:24:25 -- common/autotest_common.sh@829 -- # '[' -z 60532 ']' 00:10:18.911 14:24:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.911 14:24:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.911 14:24:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
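(Condensed from the nvmftestinit trace above: the virtual topology the nvmf_example test runs on. Interface names, addresses and iptables rules are taken from the log; the pre-cleanup pass and the "Cannot find device" / "Cannot open network namespace" probes are omitted, so this is a sketch rather than nvmf/common.sh itself.)

#!/usr/bin/env bash
# Sketch of the veth/netns layout built by nvmf_veth_init in the trace above.
set -e

ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: the *_if ends carry traffic, the *_br ends get bridged together.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace where the nvmf example app runs.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so the initiator can reach both target addresses.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in, allow bridge-local forwarding, then sanity-ping.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

On top of this topology the example target is then configured over RPC (nvmf_create_transport -t tcp, bdev_malloc_create 64 512, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.2:4420) and spdk_nvme_perf is pointed at that listener, which is what the next stretch of the log shows.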
00:10:18.911 14:24:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.911 14:24:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.286 14:24:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.286 14:24:26 -- common/autotest_common.sh@862 -- # return 0 00:10:20.286 14:24:26 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:20.286 14:24:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.286 14:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.286 14:24:26 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.286 14:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.286 14:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.286 14:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.286 14:24:26 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:20.286 14:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.287 14:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 14:24:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.287 14:24:27 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:20.287 14:24:27 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.287 14:24:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.287 14:24:27 -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 14:24:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.287 14:24:27 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:20.287 14:24:27 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.287 14:24:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.287 14:24:27 -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 14:24:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.287 14:24:27 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.287 14:24:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.287 14:24:27 -- common/autotest_common.sh@10 -- # set +x 00:10:20.287 14:24:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.287 14:24:27 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:10:20.287 14:24:27 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:32.484 Initializing NVMe Controllers 00:10:32.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:32.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:32.484 Initialization complete. Launching workers. 
00:10:32.484 ======================================================== 00:10:32.484 Latency(us) 00:10:32.484 Device Information : IOPS MiB/s Average min max 00:10:32.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15014.80 58.65 4261.96 872.38 20209.09 00:10:32.484 ======================================================== 00:10:32.484 Total : 15014.80 58.65 4261.96 872.38 20209.09 00:10:32.484 00:10:32.484 14:24:37 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:32.485 14:24:37 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:32.485 14:24:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:32.485 14:24:37 -- nvmf/common.sh@116 -- # sync 00:10:32.485 14:24:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:32.485 14:24:37 -- nvmf/common.sh@119 -- # set +e 00:10:32.485 14:24:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:32.485 14:24:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:32.485 rmmod nvme_tcp 00:10:32.485 rmmod nvme_fabrics 00:10:32.485 rmmod nvme_keyring 00:10:32.485 14:24:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:32.485 14:24:37 -- nvmf/common.sh@123 -- # set -e 00:10:32.485 14:24:37 -- nvmf/common.sh@124 -- # return 0 00:10:32.485 14:24:37 -- nvmf/common.sh@477 -- # '[' -n 60532 ']' 00:10:32.485 14:24:37 -- nvmf/common.sh@478 -- # killprocess 60532 00:10:32.485 14:24:37 -- common/autotest_common.sh@936 -- # '[' -z 60532 ']' 00:10:32.485 14:24:37 -- common/autotest_common.sh@940 -- # kill -0 60532 00:10:32.485 14:24:37 -- common/autotest_common.sh@941 -- # uname 00:10:32.485 14:24:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:32.485 14:24:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60532 00:10:32.485 14:24:37 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:10:32.485 killing process with pid 60532 00:10:32.485 14:24:37 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:10:32.485 14:24:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60532' 00:10:32.485 14:24:37 -- common/autotest_common.sh@955 -- # kill 60532 00:10:32.485 14:24:37 -- common/autotest_common.sh@960 -- # wait 60532 00:10:32.485 nvmf threads initialize successfully 00:10:32.485 bdev subsystem init successfully 00:10:32.485 created a nvmf target service 00:10:32.485 create targets's poll groups done 00:10:32.485 all subsystems of target started 00:10:32.485 nvmf target is running 00:10:32.485 all subsystems of target stopped 00:10:32.485 destroy targets's poll groups done 00:10:32.485 destroyed the nvmf target service 00:10:32.485 bdev subsystem finish successfully 00:10:32.485 nvmf threads destroy successfully 00:10:32.485 14:24:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:32.485 14:24:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:32.485 14:24:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:32.485 14:24:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.485 14:24:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:32.485 14:24:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.485 14:24:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.485 14:24:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.485 14:24:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:32.485 14:24:37 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:32.485 14:24:37 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:10:32.485 14:24:37 -- common/autotest_common.sh@10 -- # set +x 00:10:32.485 00:10:32.485 real 0m12.867s 00:10:32.485 user 0m45.460s 00:10:32.485 sys 0m2.302s 00:10:32.485 14:24:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.485 14:24:37 -- common/autotest_common.sh@10 -- # set +x 00:10:32.485 ************************************ 00:10:32.485 END TEST nvmf_example 00:10:32.485 ************************************ 00:10:32.485 14:24:37 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:32.485 14:24:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:32.485 14:24:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.485 14:24:37 -- common/autotest_common.sh@10 -- # set +x 00:10:32.485 ************************************ 00:10:32.485 START TEST nvmf_filesystem 00:10:32.485 ************************************ 00:10:32.485 14:24:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:32.485 * Looking for test storage... 00:10:32.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.485 14:24:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:32.485 14:24:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:32.485 14:24:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:32.485 14:24:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:32.485 14:24:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:32.485 14:24:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:32.485 14:24:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:32.485 14:24:38 -- scripts/common.sh@335 -- # IFS=.-: 00:10:32.485 14:24:38 -- scripts/common.sh@335 -- # read -ra ver1 00:10:32.485 14:24:38 -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.485 14:24:38 -- scripts/common.sh@336 -- # read -ra ver2 00:10:32.485 14:24:38 -- scripts/common.sh@337 -- # local 'op=<' 00:10:32.485 14:24:38 -- scripts/common.sh@339 -- # ver1_l=2 00:10:32.485 14:24:38 -- scripts/common.sh@340 -- # ver2_l=1 00:10:32.485 14:24:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:32.485 14:24:38 -- scripts/common.sh@343 -- # case "$op" in 00:10:32.485 14:24:38 -- scripts/common.sh@344 -- # : 1 00:10:32.485 14:24:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:32.485 14:24:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.485 14:24:38 -- scripts/common.sh@364 -- # decimal 1 00:10:32.485 14:24:38 -- scripts/common.sh@352 -- # local d=1 00:10:32.485 14:24:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.485 14:24:38 -- scripts/common.sh@354 -- # echo 1 00:10:32.485 14:24:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:32.485 14:24:38 -- scripts/common.sh@365 -- # decimal 2 00:10:32.485 14:24:38 -- scripts/common.sh@352 -- # local d=2 00:10:32.485 14:24:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.485 14:24:38 -- scripts/common.sh@354 -- # echo 2 00:10:32.485 14:24:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:32.485 14:24:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:32.485 14:24:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:32.485 14:24:38 -- scripts/common.sh@367 -- # return 0 00:10:32.485 14:24:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.485 14:24:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:32.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.485 --rc genhtml_branch_coverage=1 00:10:32.485 --rc genhtml_function_coverage=1 00:10:32.485 --rc genhtml_legend=1 00:10:32.485 --rc geninfo_all_blocks=1 00:10:32.485 --rc geninfo_unexecuted_blocks=1 00:10:32.485 00:10:32.485 ' 00:10:32.485 14:24:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:32.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.485 --rc genhtml_branch_coverage=1 00:10:32.485 --rc genhtml_function_coverage=1 00:10:32.485 --rc genhtml_legend=1 00:10:32.485 --rc geninfo_all_blocks=1 00:10:32.485 --rc geninfo_unexecuted_blocks=1 00:10:32.485 00:10:32.485 ' 00:10:32.485 14:24:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:32.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.485 --rc genhtml_branch_coverage=1 00:10:32.485 --rc genhtml_function_coverage=1 00:10:32.485 --rc genhtml_legend=1 00:10:32.485 --rc geninfo_all_blocks=1 00:10:32.485 --rc geninfo_unexecuted_blocks=1 00:10:32.485 00:10:32.485 ' 00:10:32.485 14:24:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:32.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.485 --rc genhtml_branch_coverage=1 00:10:32.485 --rc genhtml_function_coverage=1 00:10:32.485 --rc genhtml_legend=1 00:10:32.485 --rc geninfo_all_blocks=1 00:10:32.485 --rc geninfo_unexecuted_blocks=1 00:10:32.485 00:10:32.485 ' 00:10:32.485 14:24:38 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:32.485 14:24:38 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:32.485 14:24:38 -- common/autotest_common.sh@34 -- # set -e 00:10:32.485 14:24:38 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:32.485 14:24:38 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:32.485 14:24:38 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:32.485 14:24:38 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:32.485 14:24:38 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:32.485 14:24:38 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:32.485 14:24:38 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:32.485 14:24:38 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:32.485 14:24:38 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:10:32.485 14:24:38 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:32.485 14:24:38 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:32.485 14:24:38 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:32.485 14:24:38 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:32.485 14:24:38 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:32.485 14:24:38 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:32.485 14:24:38 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:32.485 14:24:38 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:32.485 14:24:38 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:32.485 14:24:38 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:32.485 14:24:38 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:32.485 14:24:38 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:32.485 14:24:38 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:32.485 14:24:38 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:32.485 14:24:38 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:32.485 14:24:38 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:32.485 14:24:38 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:32.485 14:24:38 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:32.485 14:24:38 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:32.485 14:24:38 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:32.485 14:24:38 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:32.485 14:24:38 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:32.485 14:24:38 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:32.486 14:24:38 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:32.486 14:24:38 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:32.486 14:24:38 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:32.486 14:24:38 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:32.486 14:24:38 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:32.486 14:24:38 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:32.486 14:24:38 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:32.486 14:24:38 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:32.486 14:24:38 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:32.486 14:24:38 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:32.486 14:24:38 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:32.486 14:24:38 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:32.486 14:24:38 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:32.486 14:24:38 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:32.486 14:24:38 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:32.486 14:24:38 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:32.486 14:24:38 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:32.486 14:24:38 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:10:32.486 14:24:38 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:10:32.486 14:24:38 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:32.486 14:24:38 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:10:32.486 14:24:38 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:10:32.486 14:24:38 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:10:32.486 
14:24:38 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:10:32.486 14:24:38 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:10:32.486 14:24:38 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:10:32.486 14:24:38 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:10:32.486 14:24:38 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:10:32.486 14:24:38 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:10:32.486 14:24:38 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:10:32.486 14:24:38 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:10:32.486 14:24:38 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:10:32.486 14:24:38 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:10:32.486 14:24:38 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:10:32.486 14:24:38 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:10:32.486 14:24:38 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:10:32.486 14:24:38 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:10:32.486 14:24:38 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:32.486 14:24:38 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:10:32.486 14:24:38 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:10:32.486 14:24:38 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:10:32.486 14:24:38 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:10:32.486 14:24:38 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:10:32.486 14:24:38 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:10:32.486 14:24:38 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:10:32.486 14:24:38 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:10:32.486 14:24:38 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:10:32.486 14:24:38 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:10:32.486 14:24:38 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:32.486 14:24:38 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:10:32.486 14:24:38 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:10:32.486 14:24:38 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:32.486 14:24:38 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:32.486 14:24:38 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:32.486 14:24:38 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:32.486 14:24:38 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:32.486 14:24:38 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:32.486 14:24:38 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:32.486 14:24:38 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:32.486 14:24:38 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:32.486 14:24:38 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:32.486 14:24:38 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:32.486 14:24:38 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:32.486 14:24:38 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:32.486 14:24:38 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:32.486 14:24:38 -- common/applications.sh@22 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:32.486 14:24:38 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:32.486 #define SPDK_CONFIG_H 00:10:32.486 #define SPDK_CONFIG_APPS 1 00:10:32.486 #define SPDK_CONFIG_ARCH native 00:10:32.486 #undef SPDK_CONFIG_ASAN 00:10:32.486 #define SPDK_CONFIG_AVAHI 1 00:10:32.486 #undef SPDK_CONFIG_CET 00:10:32.486 #define SPDK_CONFIG_COVERAGE 1 00:10:32.486 #define SPDK_CONFIG_CROSS_PREFIX 00:10:32.486 #undef SPDK_CONFIG_CRYPTO 00:10:32.486 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:32.486 #undef SPDK_CONFIG_CUSTOMOCF 00:10:32.486 #undef SPDK_CONFIG_DAOS 00:10:32.486 #define SPDK_CONFIG_DAOS_DIR 00:10:32.486 #define SPDK_CONFIG_DEBUG 1 00:10:32.486 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:32.486 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:32.486 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:32.486 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:32.486 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:32.486 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:32.486 #define SPDK_CONFIG_EXAMPLES 1 00:10:32.486 #undef SPDK_CONFIG_FC 00:10:32.486 #define SPDK_CONFIG_FC_PATH 00:10:32.486 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:32.486 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:32.486 #undef SPDK_CONFIG_FUSE 00:10:32.486 #undef SPDK_CONFIG_FUZZER 00:10:32.486 #define SPDK_CONFIG_FUZZER_LIB 00:10:32.486 #define SPDK_CONFIG_GOLANG 1 00:10:32.486 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:32.486 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:32.486 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:32.486 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:32.486 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:32.486 #define SPDK_CONFIG_IDXD 1 00:10:32.486 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:32.486 #undef SPDK_CONFIG_IPSEC_MB 00:10:32.486 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:32.486 #define SPDK_CONFIG_ISAL 1 00:10:32.486 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:32.486 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:32.486 #define SPDK_CONFIG_LIBDIR 00:10:32.486 #undef SPDK_CONFIG_LTO 00:10:32.486 #define SPDK_CONFIG_MAX_LCORES 00:10:32.486 #define SPDK_CONFIG_NVME_CUSE 1 00:10:32.486 #undef SPDK_CONFIG_OCF 00:10:32.486 #define SPDK_CONFIG_OCF_PATH 00:10:32.486 #define SPDK_CONFIG_OPENSSL_PATH 00:10:32.486 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:32.486 #undef SPDK_CONFIG_PGO_USE 00:10:32.486 #define SPDK_CONFIG_PREFIX /usr/local 00:10:32.486 #undef SPDK_CONFIG_RAID5F 00:10:32.486 #undef SPDK_CONFIG_RBD 00:10:32.486 #define SPDK_CONFIG_RDMA 1 00:10:32.486 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:32.486 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:32.486 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:32.486 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:32.486 #define SPDK_CONFIG_SHARED 1 00:10:32.486 #undef SPDK_CONFIG_SMA 00:10:32.486 #define SPDK_CONFIG_TESTS 1 00:10:32.486 #undef SPDK_CONFIG_TSAN 00:10:32.486 #define SPDK_CONFIG_UBLK 1 00:10:32.486 #define SPDK_CONFIG_UBSAN 1 00:10:32.486 #undef SPDK_CONFIG_UNIT_TESTS 00:10:32.486 #undef SPDK_CONFIG_URING 00:10:32.486 #define SPDK_CONFIG_URING_PATH 00:10:32.486 #undef SPDK_CONFIG_URING_ZNS 00:10:32.486 #define SPDK_CONFIG_USDT 1 00:10:32.486 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:32.486 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:32.486 #define SPDK_CONFIG_VFIO_USER 1 00:10:32.486 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:32.486 #define SPDK_CONFIG_VHOST 1 00:10:32.486 #define SPDK_CONFIG_VIRTIO 1 00:10:32.486 #undef SPDK_CONFIG_VTUNE 00:10:32.486 #define SPDK_CONFIG_VTUNE_DIR 
00:10:32.486 #define SPDK_CONFIG_WERROR 1 00:10:32.486 #define SPDK_CONFIG_WPDK_DIR 00:10:32.486 #undef SPDK_CONFIG_XNVME 00:10:32.486 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:32.486 14:24:38 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:32.486 14:24:38 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.486 14:24:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.486 14:24:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.486 14:24:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.486 14:24:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.486 14:24:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.486 14:24:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.486 14:24:38 -- paths/export.sh@5 -- # export PATH 00:10:32.487 14:24:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.487 14:24:38 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:32.487 14:24:38 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:32.487 14:24:38 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:32.487 14:24:38 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:32.487 14:24:38 -- pm/common@7 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:32.487 14:24:38 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:32.487 14:24:38 -- pm/common@16 -- # TEST_TAG=N/A 00:10:32.487 14:24:38 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:32.487 14:24:38 -- common/autotest_common.sh@52 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:10:32.487 14:24:38 -- common/autotest_common.sh@56 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:32.487 14:24:38 -- common/autotest_common.sh@58 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:10:32.487 14:24:38 -- common/autotest_common.sh@60 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:32.487 14:24:38 -- common/autotest_common.sh@62 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:10:32.487 14:24:38 -- common/autotest_common.sh@64 -- # : 00:10:32.487 14:24:38 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:10:32.487 14:24:38 -- common/autotest_common.sh@66 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:10:32.487 14:24:38 -- common/autotest_common.sh@68 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:10:32.487 14:24:38 -- common/autotest_common.sh@70 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:10:32.487 14:24:38 -- common/autotest_common.sh@72 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:32.487 14:24:38 -- common/autotest_common.sh@74 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:10:32.487 14:24:38 -- common/autotest_common.sh@76 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:10:32.487 14:24:38 -- common/autotest_common.sh@78 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:10:32.487 14:24:38 -- common/autotest_common.sh@80 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:10:32.487 14:24:38 -- common/autotest_common.sh@82 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:10:32.487 14:24:38 -- common/autotest_common.sh@84 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:10:32.487 14:24:38 -- common/autotest_common.sh@86 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:10:32.487 14:24:38 -- common/autotest_common.sh@88 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:10:32.487 14:24:38 -- common/autotest_common.sh@90 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:32.487 14:24:38 -- common/autotest_common.sh@92 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:10:32.487 14:24:38 -- common/autotest_common.sh@94 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:10:32.487 14:24:38 -- common/autotest_common.sh@96 -- # : tcp 00:10:32.487 14:24:38 -- common/autotest_common.sh@97 -- # export 
SPDK_TEST_NVMF_TRANSPORT 00:10:32.487 14:24:38 -- common/autotest_common.sh@98 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:10:32.487 14:24:38 -- common/autotest_common.sh@100 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:10:32.487 14:24:38 -- common/autotest_common.sh@102 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:10:32.487 14:24:38 -- common/autotest_common.sh@104 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:10:32.487 14:24:38 -- common/autotest_common.sh@106 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:10:32.487 14:24:38 -- common/autotest_common.sh@108 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:10:32.487 14:24:38 -- common/autotest_common.sh@110 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:10:32.487 14:24:38 -- common/autotest_common.sh@112 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:32.487 14:24:38 -- common/autotest_common.sh@114 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:10:32.487 14:24:38 -- common/autotest_common.sh@116 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:10:32.487 14:24:38 -- common/autotest_common.sh@118 -- # : 00:10:32.487 14:24:38 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:32.487 14:24:38 -- common/autotest_common.sh@120 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:10:32.487 14:24:38 -- common/autotest_common.sh@122 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:10:32.487 14:24:38 -- common/autotest_common.sh@124 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:10:32.487 14:24:38 -- common/autotest_common.sh@126 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:10:32.487 14:24:38 -- common/autotest_common.sh@128 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:10:32.487 14:24:38 -- common/autotest_common.sh@130 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:10:32.487 14:24:38 -- common/autotest_common.sh@132 -- # : 00:10:32.487 14:24:38 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:10:32.487 14:24:38 -- common/autotest_common.sh@134 -- # : true 00:10:32.487 14:24:38 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:10:32.487 14:24:38 -- common/autotest_common.sh@136 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:10:32.487 14:24:38 -- common/autotest_common.sh@138 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:10:32.487 14:24:38 -- common/autotest_common.sh@140 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:10:32.487 14:24:38 -- common/autotest_common.sh@142 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:10:32.487 14:24:38 -- common/autotest_common.sh@144 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@145 -- # 
export SPDK_TEST_SCHEDULER 00:10:32.487 14:24:38 -- common/autotest_common.sh@146 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:10:32.487 14:24:38 -- common/autotest_common.sh@148 -- # : 00:10:32.487 14:24:38 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:10:32.487 14:24:38 -- common/autotest_common.sh@150 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:10:32.487 14:24:38 -- common/autotest_common.sh@152 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:10:32.487 14:24:38 -- common/autotest_common.sh@154 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:10:32.487 14:24:38 -- common/autotest_common.sh@156 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:10:32.487 14:24:38 -- common/autotest_common.sh@158 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:10:32.487 14:24:38 -- common/autotest_common.sh@160 -- # : 0 00:10:32.487 14:24:38 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:10:32.487 14:24:38 -- common/autotest_common.sh@163 -- # : 00:10:32.487 14:24:38 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:10:32.487 14:24:38 -- common/autotest_common.sh@165 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:10:32.487 14:24:38 -- common/autotest_common.sh@167 -- # : 1 00:10:32.487 14:24:38 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:32.487 14:24:38 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.487 14:24:38 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.487 14:24:38 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.487 14:24:38 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:32.487 14:24:38 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:32.488 14:24:38 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:32.488 14:24:38 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:10:32.488 14:24:38 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.488 14:24:38 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.488 14:24:38 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.488 14:24:38 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.488 14:24:38 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:32.488 14:24:38 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:10:32.488 14:24:38 -- common/autotest_common.sh@196 -- # cat 00:10:32.488 14:24:38 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:10:32.488 14:24:38 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.488 14:24:38 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.488 14:24:38 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.488 14:24:38 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.488 14:24:38 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:10:32.488 14:24:38 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:10:32.488 14:24:38 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:32.488 14:24:38 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:32.488 14:24:38 -- 
common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:32.488 14:24:38 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:32.488 14:24:38 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.488 14:24:38 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.488 14:24:38 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.488 14:24:38 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.488 14:24:38 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:32.488 14:24:38 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:32.488 14:24:38 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.488 14:24:38 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.488 14:24:38 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:10:32.488 14:24:38 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:10:32.488 14:24:38 -- common/autotest_common.sh@249 -- # _LCOV= 00:10:32.488 14:24:38 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:32.488 14:24:38 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:10:32.488 14:24:38 -- common/autotest_common.sh@255 -- # lcov_opt= 00:10:32.488 14:24:38 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:10:32.488 14:24:38 -- common/autotest_common.sh@259 -- # export valgrind= 00:10:32.488 14:24:38 -- common/autotest_common.sh@259 -- # valgrind= 00:10:32.488 14:24:38 -- common/autotest_common.sh@265 -- # uname -s 00:10:32.488 14:24:38 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:10:32.488 14:24:38 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:10:32.488 14:24:38 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:10:32.488 14:24:38 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:10:32.488 14:24:38 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@275 -- # MAKE=make 00:10:32.488 14:24:38 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:10:32.488 14:24:38 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:10:32.488 14:24:38 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:10:32.488 14:24:38 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:32.488 14:24:38 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:10:32.488 14:24:38 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:10:32.488 14:24:38 -- common/autotest_common.sh@301 -- # for i in "$@" 00:10:32.488 14:24:38 -- common/autotest_common.sh@302 -- # case "$i" in 00:10:32.488 14:24:38 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:10:32.488 14:24:38 -- common/autotest_common.sh@319 -- # [[ -z 60781 ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@319 -- # kill -0 60781 00:10:32.488 14:24:38 -- 
common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:10:32.488 14:24:38 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:10:32.488 14:24:38 -- common/autotest_common.sh@332 -- # local mount target_dir 00:10:32.488 14:24:38 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:10:32.488 14:24:38 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:10:32.488 14:24:38 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:10:32.488 14:24:38 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:10:32.488 14:24:38 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.6j2N9K 00:10:32.488 14:24:38 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:32.488 14:24:38 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:10:32.488 14:24:38 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.6j2N9K/tests/target /tmp/spdk.6j2N9K 00:10:32.488 14:24:38 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@328 -- # df -T 00:10:32.488 14:24:38 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=14017265664 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=5550325760 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # 
uses["$mount"]=12816384 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=14017265664 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=5550325760 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266294272 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=135168 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:10:32.488 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:10:32.488 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:10:32.488 14:24:38 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:10:32.488 14:24:38 -- common/autotest_common.sh@363 -- # avails["$mount"]=93491900416 00:10:32.489 14:24:38 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:10:32.489 14:24:38 -- common/autotest_common.sh@364 -- # uses["$mount"]=6210879488 00:10:32.489 14:24:38 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:10:32.489 14:24:38 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:10:32.489 * Looking for test storage... 
00:10:32.489 14:24:38 -- common/autotest_common.sh@369 -- # local target_space new_size 00:10:32.489 14:24:38 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:10:32.489 14:24:38 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:32.489 14:24:38 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.489 14:24:38 -- common/autotest_common.sh@373 -- # mount=/home 00:10:32.489 14:24:38 -- common/autotest_common.sh@375 -- # target_space=14017265664 00:10:32.489 14:24:38 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:10:32.489 14:24:38 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:10:32.489 14:24:38 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:10:32.489 14:24:38 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:10:32.489 14:24:38 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:10:32.489 14:24:38 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.489 14:24:38 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.489 14:24:38 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.489 14:24:38 -- common/autotest_common.sh@390 -- # return 0 00:10:32.489 14:24:38 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:10:32.489 14:24:38 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:10:32.489 14:24:38 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:32.489 14:24:38 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:32.489 14:24:38 -- common/autotest_common.sh@1682 -- # true 00:10:32.489 14:24:38 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:10:32.489 14:24:38 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:32.489 14:24:38 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:32.489 14:24:38 -- common/autotest_common.sh@27 -- # exec 00:10:32.489 14:24:38 -- common/autotest_common.sh@29 -- # exec 00:10:32.489 14:24:38 -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:32.489 14:24:38 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:32.489 14:24:38 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:32.489 14:24:38 -- common/autotest_common.sh@18 -- # set -x 00:10:32.489 14:24:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:32.489 14:24:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:32.489 14:24:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:32.489 14:24:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:32.489 14:24:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:32.489 14:24:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:32.489 14:24:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:32.489 14:24:38 -- scripts/common.sh@335 -- # IFS=.-: 00:10:32.489 14:24:38 -- scripts/common.sh@335 -- # read -ra ver1 00:10:32.489 14:24:38 -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.489 14:24:38 -- scripts/common.sh@336 -- # read -ra ver2 00:10:32.489 14:24:38 -- scripts/common.sh@337 -- # local 'op=<' 00:10:32.489 14:24:38 -- scripts/common.sh@339 -- # ver1_l=2 00:10:32.489 14:24:38 -- scripts/common.sh@340 -- # ver2_l=1 00:10:32.489 14:24:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:32.489 14:24:38 -- scripts/common.sh@343 -- # case "$op" in 00:10:32.489 14:24:38 -- scripts/common.sh@344 -- # : 1 00:10:32.489 14:24:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:32.489 14:24:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.489 14:24:38 -- scripts/common.sh@364 -- # decimal 1 00:10:32.489 14:24:38 -- scripts/common.sh@352 -- # local d=1 00:10:32.489 14:24:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.489 14:24:38 -- scripts/common.sh@354 -- # echo 1 00:10:32.489 14:24:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:32.489 14:24:38 -- scripts/common.sh@365 -- # decimal 2 00:10:32.489 14:24:38 -- scripts/common.sh@352 -- # local d=2 00:10:32.489 14:24:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.489 14:24:38 -- scripts/common.sh@354 -- # echo 2 00:10:32.489 14:24:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:32.489 14:24:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:32.489 14:24:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:32.489 14:24:38 -- scripts/common.sh@367 -- # return 0 00:10:32.489 14:24:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.489 14:24:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:32.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.489 --rc genhtml_branch_coverage=1 00:10:32.489 --rc genhtml_function_coverage=1 00:10:32.489 --rc genhtml_legend=1 00:10:32.489 --rc geninfo_all_blocks=1 00:10:32.489 --rc geninfo_unexecuted_blocks=1 00:10:32.489 00:10:32.489 ' 00:10:32.489 14:24:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:32.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.489 --rc genhtml_branch_coverage=1 00:10:32.489 --rc genhtml_function_coverage=1 00:10:32.489 --rc genhtml_legend=1 00:10:32.489 --rc geninfo_all_blocks=1 00:10:32.489 --rc geninfo_unexecuted_blocks=1 00:10:32.489 00:10:32.489 ' 00:10:32.489 14:24:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:32.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.489 --rc genhtml_branch_coverage=1 00:10:32.489 --rc genhtml_function_coverage=1 00:10:32.489 --rc genhtml_legend=1 00:10:32.489 --rc geninfo_all_blocks=1 00:10:32.489 --rc 
geninfo_unexecuted_blocks=1 00:10:32.489 00:10:32.489 ' 00:10:32.489 14:24:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:32.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.489 --rc genhtml_branch_coverage=1 00:10:32.489 --rc genhtml_function_coverage=1 00:10:32.489 --rc genhtml_legend=1 00:10:32.489 --rc geninfo_all_blocks=1 00:10:32.489 --rc geninfo_unexecuted_blocks=1 00:10:32.489 00:10:32.489 ' 00:10:32.489 14:24:38 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.489 14:24:38 -- nvmf/common.sh@7 -- # uname -s 00:10:32.489 14:24:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.489 14:24:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.489 14:24:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.489 14:24:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.489 14:24:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.489 14:24:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.489 14:24:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.489 14:24:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.489 14:24:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.489 14:24:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.489 14:24:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:10:32.489 14:24:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:10:32.489 14:24:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.489 14:24:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.489 14:24:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.489 14:24:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.489 14:24:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.489 14:24:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.489 14:24:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.489 14:24:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.489 14:24:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.490 14:24:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.490 14:24:38 -- paths/export.sh@5 -- # export PATH 00:10:32.490 14:24:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.490 14:24:38 -- nvmf/common.sh@46 -- # : 0 00:10:32.490 14:24:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:32.490 14:24:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:32.490 14:24:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:32.490 14:24:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.490 14:24:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.490 14:24:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:32.490 14:24:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:32.490 14:24:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:32.490 14:24:38 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:32.490 14:24:38 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:32.490 14:24:38 -- target/filesystem.sh@15 -- # nvmftestinit 00:10:32.490 14:24:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:32.490 14:24:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.490 14:24:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:32.490 14:24:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:32.490 14:24:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:32.490 14:24:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.490 14:24:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.490 14:24:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.490 14:24:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:32.490 14:24:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:32.490 14:24:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:32.490 14:24:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:32.490 14:24:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:32.490 14:24:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:32.490 14:24:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.490 14:24:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.490 14:24:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:32.490 14:24:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:32.490 14:24:38 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.490 14:24:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.490 14:24:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.490 14:24:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.490 14:24:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.490 14:24:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.490 14:24:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.490 14:24:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.490 14:24:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:32.490 14:24:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:32.490 Cannot find device "nvmf_tgt_br" 00:10:32.490 14:24:38 -- nvmf/common.sh@154 -- # true 00:10:32.490 14:24:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.490 Cannot find device "nvmf_tgt_br2" 00:10:32.490 14:24:38 -- nvmf/common.sh@155 -- # true 00:10:32.490 14:24:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:32.490 14:24:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:32.490 Cannot find device "nvmf_tgt_br" 00:10:32.490 14:24:38 -- nvmf/common.sh@157 -- # true 00:10:32.490 14:24:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:32.490 Cannot find device "nvmf_tgt_br2" 00:10:32.490 14:24:38 -- nvmf/common.sh@158 -- # true 00:10:32.490 14:24:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:32.490 14:24:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:32.490 14:24:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.490 14:24:38 -- nvmf/common.sh@161 -- # true 00:10:32.490 14:24:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.490 14:24:38 -- nvmf/common.sh@162 -- # true 00:10:32.490 14:24:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:32.490 14:24:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:32.490 14:24:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:32.490 14:24:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:32.490 14:24:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:32.490 14:24:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:32.490 14:24:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:32.490 14:24:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:32.490 14:24:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:32.490 14:24:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:32.490 14:24:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:32.490 14:24:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:32.490 14:24:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:32.490 14:24:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:32.490 14:24:38 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:32.490 14:24:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:32.490 14:24:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:32.490 14:24:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:32.490 14:24:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:32.490 14:24:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:32.490 14:24:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:32.490 14:24:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:32.490 14:24:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.490 14:24:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:32.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:32.490 00:10:32.490 --- 10.0.0.2 ping statistics --- 00:10:32.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.490 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:32.490 14:24:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:32.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:32.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:10:32.490 00:10:32.490 --- 10.0.0.3 ping statistics --- 00:10:32.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.490 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:32.490 14:24:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:10:32.490 00:10:32.490 --- 10.0.0.1 ping statistics --- 00:10:32.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.490 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:32.490 14:24:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.490 14:24:38 -- nvmf/common.sh@421 -- # return 0 00:10:32.490 14:24:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:32.490 14:24:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.490 14:24:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:32.490 14:24:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:32.490 14:24:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.490 14:24:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:32.490 14:24:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:32.490 14:24:38 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:32.490 14:24:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:32.490 14:24:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.490 14:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:32.490 ************************************ 00:10:32.490 START TEST nvmf_filesystem_no_in_capsule 00:10:32.490 ************************************ 00:10:32.490 14:24:38 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:10:32.490 14:24:38 -- target/filesystem.sh@47 -- # in_capsule=0 00:10:32.490 14:24:38 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:32.490 14:24:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:32.490 14:24:38 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:10:32.490 14:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:32.490 14:24:38 -- nvmf/common.sh@469 -- # nvmfpid=60959 00:10:32.490 14:24:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.490 14:24:38 -- nvmf/common.sh@470 -- # waitforlisten 60959 00:10:32.490 14:24:38 -- common/autotest_common.sh@829 -- # '[' -z 60959 ']' 00:10:32.490 14:24:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.490 14:24:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.490 14:24:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.490 14:24:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.490 14:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:32.490 [2024-12-06 14:24:38.882146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.490 [2024-12-06 14:24:38.882290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.490 [2024-12-06 14:24:39.024614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.490 [2024-12-06 14:24:39.198729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:32.491 [2024-12-06 14:24:39.198902] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.491 [2024-12-06 14:24:39.198915] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.491 [2024-12-06 14:24:39.198925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
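
[annotation] The entries above show the harness launching the SPDK NVMe-oF target inside the dedicated network namespace and then waiting for its RPC socket before any configuration is issued. A minimal sketch of that pattern follows; the binary path, namespace name, and flags are taken from the log, while the readiness loop is an illustrative simplification (the suite's real waitforlisten helper also probes the socket via rpc.py).

  # Sketch: start nvmf_tgt inside the test namespace and wait until it is listening.
  NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
  NVMF_TGT_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

  ip netns exec "$NVMF_TARGET_NAMESPACE" "$NVMF_TGT_BIN" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Simplified readiness check: wait for the UNIX-domain RPC socket to appear.
  for i in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done
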
00:10:32.491 [2024-12-06 14:24:39.199059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.491 [2024-12-06 14:24:39.199799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.491 [2024-12-06 14:24:39.199967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.491 [2024-12-06 14:24:39.199969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.057 14:24:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.057 14:24:39 -- common/autotest_common.sh@862 -- # return 0 00:10:33.057 14:24:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:33.057 14:24:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.057 14:24:39 -- common/autotest_common.sh@10 -- # set +x 00:10:33.057 14:24:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.057 14:24:39 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:33.057 14:24:39 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:33.057 14:24:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.057 14:24:39 -- common/autotest_common.sh@10 -- # set +x 00:10:33.057 [2024-12-06 14:24:39.979016] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.057 14:24:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.057 14:24:39 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:33.057 14:24:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.057 14:24:39 -- common/autotest_common.sh@10 -- # set +x 00:10:33.315 Malloc1 00:10:33.573 14:24:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.573 14:24:40 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:33.573 14:24:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.573 14:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:33.573 14:24:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.573 14:24:40 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.573 14:24:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.573 14:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:33.573 14:24:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.573 14:24:40 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.573 14:24:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.573 14:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:33.573 [2024-12-06 14:24:40.314525] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.573 14:24:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.573 14:24:40 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:33.573 14:24:40 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:10:33.573 14:24:40 -- common/autotest_common.sh@1368 -- # local bdev_info 00:10:33.573 14:24:40 -- common/autotest_common.sh@1369 -- # local bs 00:10:33.573 14:24:40 -- common/autotest_common.sh@1370 -- # local nb 00:10:33.573 14:24:40 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:33.573 14:24:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.573 14:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:33.573 
14:24:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.573 14:24:40 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:10:33.573 { 00:10:33.573 "aliases": [ 00:10:33.573 "f1f99873-2969-4e76-9dd9-e4d27dd8e7c2" 00:10:33.573 ], 00:10:33.573 "assigned_rate_limits": { 00:10:33.573 "r_mbytes_per_sec": 0, 00:10:33.573 "rw_ios_per_sec": 0, 00:10:33.573 "rw_mbytes_per_sec": 0, 00:10:33.573 "w_mbytes_per_sec": 0 00:10:33.573 }, 00:10:33.573 "block_size": 512, 00:10:33.573 "claim_type": "exclusive_write", 00:10:33.573 "claimed": true, 00:10:33.573 "driver_specific": {}, 00:10:33.573 "memory_domains": [ 00:10:33.573 { 00:10:33.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.573 "dma_device_type": 2 00:10:33.573 } 00:10:33.573 ], 00:10:33.573 "name": "Malloc1", 00:10:33.573 "num_blocks": 1048576, 00:10:33.573 "product_name": "Malloc disk", 00:10:33.573 "supported_io_types": { 00:10:33.573 "abort": true, 00:10:33.573 "compare": false, 00:10:33.573 "compare_and_write": false, 00:10:33.573 "flush": true, 00:10:33.573 "nvme_admin": false, 00:10:33.573 "nvme_io": false, 00:10:33.573 "read": true, 00:10:33.573 "reset": true, 00:10:33.573 "unmap": true, 00:10:33.573 "write": true, 00:10:33.573 "write_zeroes": true 00:10:33.573 }, 00:10:33.573 "uuid": "f1f99873-2969-4e76-9dd9-e4d27dd8e7c2", 00:10:33.573 "zoned": false 00:10:33.573 } 00:10:33.573 ]' 00:10:33.573 14:24:40 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:10:33.573 14:24:40 -- common/autotest_common.sh@1372 -- # bs=512 00:10:33.573 14:24:40 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:10:33.573 14:24:40 -- common/autotest_common.sh@1373 -- # nb=1048576 00:10:33.573 14:24:40 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:10:33.573 14:24:40 -- common/autotest_common.sh@1377 -- # echo 512 00:10:33.573 14:24:40 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:33.573 14:24:40 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:33.831 14:24:40 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.831 14:24:40 -- common/autotest_common.sh@1187 -- # local i=0 00:10:33.831 14:24:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.831 14:24:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:33.831 14:24:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:35.737 14:24:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:35.737 14:24:42 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.737 14:24:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:35.737 14:24:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:35.737 14:24:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.737 14:24:42 -- common/autotest_common.sh@1197 -- # return 0 00:10:35.737 14:24:42 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:35.737 14:24:42 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:35.737 14:24:42 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:35.737 14:24:42 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:35.737 14:24:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:35.737 14:24:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:35.737 14:24:42 -- 
setup/common.sh@80 -- # echo 536870912 00:10:35.737 14:24:42 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:35.737 14:24:42 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:35.737 14:24:42 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:35.737 14:24:42 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:35.995 14:24:42 -- target/filesystem.sh@69 -- # partprobe 00:10:35.995 14:24:42 -- target/filesystem.sh@70 -- # sleep 1 00:10:36.933 14:24:43 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:36.933 14:24:43 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:36.933 14:24:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:36.933 14:24:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.933 14:24:43 -- common/autotest_common.sh@10 -- # set +x 00:10:36.933 ************************************ 00:10:36.933 START TEST filesystem_ext4 00:10:36.933 ************************************ 00:10:36.933 14:24:43 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:36.933 14:24:43 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:36.933 14:24:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.933 14:24:43 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:36.933 14:24:43 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:10:36.933 14:24:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:36.933 14:24:43 -- common/autotest_common.sh@914 -- # local i=0 00:10:36.933 14:24:43 -- common/autotest_common.sh@915 -- # local force 00:10:36.933 14:24:43 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:10:36.933 14:24:43 -- common/autotest_common.sh@918 -- # force=-F 00:10:36.933 14:24:43 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:36.933 mke2fs 1.47.0 (5-Feb-2023) 00:10:37.191 Discarding device blocks: 0/522240 done 00:10:37.191 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:37.191 Filesystem UUID: 03b787e8-de33-4f49-a6e0-1a1f7e584e98 00:10:37.191 Superblock backups stored on blocks: 00:10:37.191 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:37.191 00:10:37.191 Allocating group tables: 0/64 done 00:10:37.191 Writing inode tables: 0/64 1/64 done 00:10:37.191 Creating journal (8192 blocks): done 00:10:37.191 Writing superblocks and filesystem accounting information: 0/64 done 00:10:37.191 00:10:37.191 14:24:44 -- common/autotest_common.sh@931 -- # return 0 00:10:37.191 14:24:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.474 14:24:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.474 14:24:49 -- target/filesystem.sh@25 -- # sync 00:10:42.474 14:24:49 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.474 14:24:49 -- target/filesystem.sh@27 -- # sync 00:10:42.733 14:24:49 -- target/filesystem.sh@29 -- # i=0 00:10:42.733 14:24:49 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.733 14:24:49 -- target/filesystem.sh@37 -- # kill -0 60959 00:10:42.733 14:24:49 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.733 14:24:49 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.733 14:24:49 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.733 14:24:49 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.733 00:10:42.733 real 0m5.587s 00:10:42.733 user 0m0.032s 00:10:42.733 sys 0m0.062s 
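
[annotation] The ext4 pass that just completed follows the same pattern every filesystem case in this suite uses: build the filesystem on the NVMe-oF-backed partition, mount it, perform a small write/sync/delete cycle, unmount, and confirm the target process and block device are still present. A condensed sketch under those assumptions (device, mount point, and PID check taken from the log; retry/error handling omitted):

  # Sketch of one filesystem verification pass (ext4 shown; btrfs/xfs differ only in the mkfs call).
  dev=/dev/nvme0n1p1        # partition on the NVMe-oF attached namespace
  mnt=/mnt/device

  mkfs.ext4 -F "$dev"       # btrfs and xfs runs use "mkfs.<fs> -f" instead
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
  touch "$mnt/aaa"          # small I/O that round-trips over the TCP transport
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"
  kill -0 "$nvmfpid"                       # target must still be running after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1    # and the namespace must still be visible
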
00:10:42.733 14:24:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.733 14:24:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.733 ************************************ 00:10:42.733 END TEST filesystem_ext4 00:10:42.733 ************************************ 00:10:42.733 14:24:49 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:42.733 14:24:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:42.733 14:24:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.733 14:24:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.733 ************************************ 00:10:42.733 START TEST filesystem_btrfs 00:10:42.733 ************************************ 00:10:42.733 14:24:49 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:42.733 14:24:49 -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:42.733 14:24:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.733 14:24:49 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:42.733 14:24:49 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:10:42.733 14:24:49 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:42.733 14:24:49 -- common/autotest_common.sh@914 -- # local i=0 00:10:42.733 14:24:49 -- common/autotest_common.sh@915 -- # local force 00:10:42.733 14:24:49 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:10:42.733 14:24:49 -- common/autotest_common.sh@920 -- # force=-f 00:10:42.733 14:24:49 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:42.991 btrfs-progs v6.8.1 00:10:42.991 See https://btrfs.readthedocs.io for more information. 00:10:42.991 00:10:42.991 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:42.991 NOTE: several default settings have changed in version 5.15, please make sure 00:10:42.991 this does not affect your deployments: 00:10:42.991 - DUP for metadata (-m dup) 00:10:42.991 - enabled no-holes (-O no-holes) 00:10:42.991 - enabled free-space-tree (-R free-space-tree) 00:10:42.991 00:10:42.991 Label: (null) 00:10:42.991 UUID: 6431ed43-53af-40ba-9486-9d7f6d1bedf4 00:10:42.991 Node size: 16384 00:10:42.991 Sector size: 4096 (CPU page size: 4096) 00:10:42.991 Filesystem size: 510.00MiB 00:10:42.991 Block group profiles: 00:10:42.991 Data: single 8.00MiB 00:10:42.991 Metadata: DUP 32.00MiB 00:10:42.991 System: DUP 8.00MiB 00:10:42.991 SSD detected: yes 00:10:42.991 Zoned device: no 00:10:42.991 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:42.991 Checksum: crc32c 00:10:42.991 Number of devices: 1 00:10:42.991 Devices: 00:10:42.991 ID SIZE PATH 00:10:42.991 1 510.00MiB /dev/nvme0n1p1 00:10:42.991 00:10:42.991 14:24:49 -- common/autotest_common.sh@931 -- # return 0 00:10:42.991 14:24:49 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.991 14:24:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.991 14:24:49 -- target/filesystem.sh@25 -- # sync 00:10:42.991 14:24:49 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.991 14:24:49 -- target/filesystem.sh@27 -- # sync 00:10:42.991 14:24:49 -- target/filesystem.sh@29 -- # i=0 00:10:42.991 14:24:49 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.991 14:24:49 -- target/filesystem.sh@37 -- # kill -0 60959 00:10:42.991 14:24:49 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.991 14:24:49 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.991 14:24:49 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.991 14:24:49 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.991 00:10:42.991 real 0m0.304s 00:10:42.991 user 0m0.020s 00:10:42.991 sys 0m0.071s 00:10:42.991 14:24:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.991 14:24:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.991 ************************************ 00:10:42.991 END TEST filesystem_btrfs 00:10:42.991 ************************************ 00:10:42.991 14:24:49 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.991 14:24:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:42.991 14:24:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.992 14:24:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.992 ************************************ 00:10:42.992 START TEST filesystem_xfs 00:10:42.992 ************************************ 00:10:42.992 14:24:49 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:10:42.992 14:24:49 -- target/filesystem.sh@18 -- # fstype=xfs 00:10:42.992 14:24:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.992 14:24:49 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:42.992 14:24:49 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:10:42.992 14:24:49 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:42.992 14:24:49 -- common/autotest_common.sh@914 -- # local i=0 00:10:42.992 14:24:49 -- common/autotest_common.sh@915 -- # local force 00:10:42.992 14:24:49 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:10:42.992 14:24:49 -- common/autotest_common.sh@920 -- # force=-f 00:10:42.992 14:24:49 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:10:43.249 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:43.249 = sectsz=512 attr=2, projid32bit=1 00:10:43.249 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:43.250 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:43.250 data = bsize=4096 blocks=130560, imaxpct=25 00:10:43.250 = sunit=0 swidth=0 blks 00:10:43.250 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:43.250 log =internal log bsize=4096 blocks=16384, version=2 00:10:43.250 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:43.250 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.816 Discarding blocks...Done. 00:10:43.816 14:24:50 -- common/autotest_common.sh@931 -- # return 0 00:10:43.816 14:24:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:46.345 14:24:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:46.345 14:24:53 -- target/filesystem.sh@25 -- # sync 00:10:46.345 14:24:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.345 14:24:53 -- target/filesystem.sh@27 -- # sync 00:10:46.345 14:24:53 -- target/filesystem.sh@29 -- # i=0 00:10:46.345 14:24:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.345 14:24:53 -- target/filesystem.sh@37 -- # kill -0 60959 00:10:46.345 14:24:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.345 14:24:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.345 14:24:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.345 14:24:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.345 00:10:46.345 real 0m3.255s 00:10:46.345 user 0m0.025s 00:10:46.345 sys 0m0.056s 00:10:46.345 14:24:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.345 14:24:53 -- common/autotest_common.sh@10 -- # set +x 00:10:46.345 ************************************ 00:10:46.345 END TEST filesystem_xfs 00:10:46.345 ************************************ 00:10:46.345 14:24:53 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:46.345 14:24:53 -- target/filesystem.sh@93 -- # sync 00:10:46.345 14:24:53 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.345 14:24:53 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.345 14:24:53 -- common/autotest_common.sh@1208 -- # local i=0 00:10:46.345 14:24:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:46.345 14:24:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.345 14:24:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:46.345 14:24:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.345 14:24:53 -- common/autotest_common.sh@1220 -- # return 0 00:10:46.345 14:24:53 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.345 14:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.345 14:24:53 -- common/autotest_common.sh@10 -- # set +x 00:10:46.345 14:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.345 14:24:53 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:46.345 14:24:53 -- target/filesystem.sh@101 -- # killprocess 60959 00:10:46.345 14:24:53 -- common/autotest_common.sh@936 -- # '[' -z 60959 ']' 00:10:46.345 14:24:53 -- common/autotest_common.sh@940 -- # kill -0 60959 00:10:46.345 14:24:53 -- common/autotest_common.sh@941 -- # uname 00:10:46.345 14:24:53 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:46.345 14:24:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60959 00:10:46.622 14:24:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:46.622 killing process with pid 60959 00:10:46.622 14:24:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:46.622 14:24:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60959' 00:10:46.622 14:24:53 -- common/autotest_common.sh@955 -- # kill 60959 00:10:46.622 14:24:53 -- common/autotest_common.sh@960 -- # wait 60959 00:10:47.557 14:24:54 -- target/filesystem.sh@102 -- # nvmfpid= 00:10:47.557 00:10:47.557 real 0m15.447s 00:10:47.557 user 0m58.660s 00:10:47.557 sys 0m1.962s 00:10:47.557 14:24:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.557 ************************************ 00:10:47.557 14:24:54 -- common/autotest_common.sh@10 -- # set +x 00:10:47.557 END TEST nvmf_filesystem_no_in_capsule 00:10:47.557 ************************************ 00:10:47.557 14:24:54 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:47.557 14:24:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:47.557 14:24:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.557 14:24:54 -- common/autotest_common.sh@10 -- # set +x 00:10:47.557 ************************************ 00:10:47.557 START TEST nvmf_filesystem_in_capsule 00:10:47.557 ************************************ 00:10:47.557 14:24:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:10:47.557 14:24:54 -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:47.557 14:24:54 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:47.557 14:24:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:47.557 14:24:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:47.557 14:24:54 -- common/autotest_common.sh@10 -- # set +x 00:10:47.557 14:24:54 -- nvmf/common.sh@469 -- # nvmfpid=61342 00:10:47.557 14:24:54 -- nvmf/common.sh@470 -- # waitforlisten 61342 00:10:47.557 14:24:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.557 14:24:54 -- common/autotest_common.sh@829 -- # '[' -z 61342 ']' 00:10:47.557 14:24:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.557 14:24:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.557 14:24:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.557 14:24:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.557 14:24:54 -- common/autotest_common.sh@10 -- # set +x 00:10:47.557 [2024-12-06 14:24:54.379939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
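
[annotation] The run starting here repeats the same filesystem checks with in-capsule data enabled: the only functional difference is the value passed to nvmf_create_transport, which lets small writes travel inside the TCP command capsule instead of a separate data transfer. A hedged sketch of the target-side RPC sequence, with values copied from the log (the suite wraps these calls in its own rpc_cmd helper):

  # Sketch: target-side configuration for the in-capsule variant.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # the first run used -c 0 (no in-capsule data)
  $rpc bdev_malloc_create 512 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
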
00:10:47.557 [2024-12-06 14:24:54.380073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.557 [2024-12-06 14:24:54.517832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.815 [2024-12-06 14:24:54.676340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:47.815 [2024-12-06 14:24:54.676551] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.815 [2024-12-06 14:24:54.676567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.815 [2024-12-06 14:24:54.676577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.815 [2024-12-06 14:24:54.676765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.815 [2024-12-06 14:24:54.677565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.815 [2024-12-06 14:24:54.677650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.815 [2024-12-06 14:24:54.677661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.750 14:24:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.750 14:24:55 -- common/autotest_common.sh@862 -- # return 0 00:10:48.750 14:24:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:48.750 14:24:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.750 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:10:48.750 14:24:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.750 14:24:55 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:48.750 14:24:55 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:48.750 14:24:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.750 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:10:48.750 [2024-12-06 14:24:55.454960] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.750 14:24:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.750 14:24:55 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:48.750 14:24:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.750 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.009 Malloc1 00:10:49.009 14:24:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.009 14:24:55 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.009 14:24:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.009 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.009 14:24:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.009 14:24:55 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:49.009 14:24:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.009 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.009 14:24:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.009 14:24:55 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.009 14:24:55 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.009 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.009 [2024-12-06 14:24:55.758487] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.009 14:24:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.009 14:24:55 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:49.009 14:24:55 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:10:49.009 14:24:55 -- common/autotest_common.sh@1368 -- # local bdev_info 00:10:49.009 14:24:55 -- common/autotest_common.sh@1369 -- # local bs 00:10:49.009 14:24:55 -- common/autotest_common.sh@1370 -- # local nb 00:10:49.009 14:24:55 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:49.009 14:24:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.009 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.009 14:24:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.009 14:24:55 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:10:49.009 { 00:10:49.009 "aliases": [ 00:10:49.009 "0ac42c4f-a097-4292-940b-bfa69375d4a8" 00:10:49.009 ], 00:10:49.009 "assigned_rate_limits": { 00:10:49.009 "r_mbytes_per_sec": 0, 00:10:49.009 "rw_ios_per_sec": 0, 00:10:49.009 "rw_mbytes_per_sec": 0, 00:10:49.009 "w_mbytes_per_sec": 0 00:10:49.009 }, 00:10:49.009 "block_size": 512, 00:10:49.009 "claim_type": "exclusive_write", 00:10:49.009 "claimed": true, 00:10:49.009 "driver_specific": {}, 00:10:49.009 "memory_domains": [ 00:10:49.009 { 00:10:49.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.009 "dma_device_type": 2 00:10:49.009 } 00:10:49.009 ], 00:10:49.009 "name": "Malloc1", 00:10:49.009 "num_blocks": 1048576, 00:10:49.009 "product_name": "Malloc disk", 00:10:49.009 "supported_io_types": { 00:10:49.009 "abort": true, 00:10:49.009 "compare": false, 00:10:49.009 "compare_and_write": false, 00:10:49.009 "flush": true, 00:10:49.009 "nvme_admin": false, 00:10:49.009 "nvme_io": false, 00:10:49.009 "read": true, 00:10:49.009 "reset": true, 00:10:49.009 "unmap": true, 00:10:49.009 "write": true, 00:10:49.009 "write_zeroes": true 00:10:49.009 }, 00:10:49.009 "uuid": "0ac42c4f-a097-4292-940b-bfa69375d4a8", 00:10:49.009 "zoned": false 00:10:49.009 } 00:10:49.009 ]' 00:10:49.009 14:24:55 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:10:49.009 14:24:55 -- common/autotest_common.sh@1372 -- # bs=512 00:10:49.009 14:24:55 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:10:49.009 14:24:55 -- common/autotest_common.sh@1373 -- # nb=1048576 00:10:49.009 14:24:55 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:10:49.009 14:24:55 -- common/autotest_common.sh@1377 -- # echo 512 00:10:49.009 14:24:55 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:49.009 14:24:55 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.267 14:24:56 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.267 14:24:56 -- common/autotest_common.sh@1187 -- # local i=0 00:10:49.267 14:24:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.267 14:24:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:49.267 14:24:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:51.171 14:24:58 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:51.171 14:24:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:51.171 14:24:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.171 14:24:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:51.171 14:24:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.171 14:24:58 -- common/autotest_common.sh@1197 -- # return 0 00:10:51.171 14:24:58 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:51.171 14:24:58 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:51.171 14:24:58 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:51.171 14:24:58 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:51.171 14:24:58 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:51.171 14:24:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:51.171 14:24:58 -- setup/common.sh@80 -- # echo 536870912 00:10:51.171 14:24:58 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:51.171 14:24:58 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:51.171 14:24:58 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:51.171 14:24:58 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:51.171 14:24:58 -- target/filesystem.sh@69 -- # partprobe 00:10:51.427 14:24:58 -- target/filesystem.sh@70 -- # sleep 1 00:10:52.362 14:24:59 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:52.362 14:24:59 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:52.362 14:24:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:52.362 14:24:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:52.362 14:24:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.362 ************************************ 00:10:52.362 START TEST filesystem_in_capsule_ext4 00:10:52.362 ************************************ 00:10:52.362 14:24:59 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:52.362 14:24:59 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:52.362 14:24:59 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.362 14:24:59 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:52.362 14:24:59 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:10:52.362 14:24:59 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:52.362 14:24:59 -- common/autotest_common.sh@914 -- # local i=0 00:10:52.362 14:24:59 -- common/autotest_common.sh@915 -- # local force 00:10:52.362 14:24:59 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:10:52.362 14:24:59 -- common/autotest_common.sh@918 -- # force=-F 00:10:52.362 14:24:59 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:52.362 mke2fs 1.47.0 (5-Feb-2023) 00:10:52.619 Discarding device blocks: 0/522240 done 00:10:52.619 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:52.619 Filesystem UUID: 41f535e8-0db9-436e-895d-5d7a2c70afe6 00:10:52.619 Superblock backups stored on blocks: 00:10:52.620 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:52.620 00:10:52.620 Allocating group tables: 0/64 done 00:10:52.620 Writing inode tables: 0/64 done 00:10:52.620 Creating journal (8192 blocks): done 00:10:52.620 Writing superblocks and filesystem accounting information: 0/64 done 00:10:52.620 00:10:52.620 14:24:59 
-- common/autotest_common.sh@931 -- # return 0 00:10:52.620 14:24:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:59.182 14:25:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:59.182 14:25:04 -- target/filesystem.sh@25 -- # sync 00:10:59.182 14:25:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:59.182 14:25:04 -- target/filesystem.sh@27 -- # sync 00:10:59.182 14:25:04 -- target/filesystem.sh@29 -- # i=0 00:10:59.182 14:25:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:59.182 14:25:04 -- target/filesystem.sh@37 -- # kill -0 61342 00:10:59.182 14:25:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:59.182 14:25:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:59.182 14:25:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:59.182 14:25:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:59.182 00:10:59.182 real 0m5.750s 00:10:59.182 user 0m0.024s 00:10:59.182 sys 0m0.062s 00:10:59.182 14:25:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:59.182 14:25:04 -- common/autotest_common.sh@10 -- # set +x 00:10:59.182 ************************************ 00:10:59.182 END TEST filesystem_in_capsule_ext4 00:10:59.182 ************************************ 00:10:59.182 14:25:05 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:59.182 14:25:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:59.182 14:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.182 14:25:05 -- common/autotest_common.sh@10 -- # set +x 00:10:59.182 ************************************ 00:10:59.182 START TEST filesystem_in_capsule_btrfs 00:10:59.182 ************************************ 00:10:59.182 14:25:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:59.182 14:25:05 -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:59.182 14:25:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:59.182 14:25:05 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:59.182 14:25:05 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:10:59.182 14:25:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:59.182 14:25:05 -- common/autotest_common.sh@914 -- # local i=0 00:10:59.182 14:25:05 -- common/autotest_common.sh@915 -- # local force 00:10:59.182 14:25:05 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:10:59.182 14:25:05 -- common/autotest_common.sh@920 -- # force=-f 00:10:59.182 14:25:05 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:59.182 btrfs-progs v6.8.1 00:10:59.182 See https://btrfs.readthedocs.io for more information. 00:10:59.182 00:10:59.182 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:59.182 NOTE: several default settings have changed in version 5.15, please make sure 00:10:59.182 this does not affect your deployments: 00:10:59.182 - DUP for metadata (-m dup) 00:10:59.182 - enabled no-holes (-O no-holes) 00:10:59.182 - enabled free-space-tree (-R free-space-tree) 00:10:59.182 00:10:59.182 Label: (null) 00:10:59.182 UUID: dc7790b3-51c0-4b13-8d43-1ebc9f943f67 00:10:59.182 Node size: 16384 00:10:59.182 Sector size: 4096 (CPU page size: 4096) 00:10:59.182 Filesystem size: 510.00MiB 00:10:59.182 Block group profiles: 00:10:59.182 Data: single 8.00MiB 00:10:59.182 Metadata: DUP 32.00MiB 00:10:59.182 System: DUP 8.00MiB 00:10:59.182 SSD detected: yes 00:10:59.182 Zoned device: no 00:10:59.182 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:59.182 Checksum: crc32c 00:10:59.182 Number of devices: 1 00:10:59.182 Devices: 00:10:59.182 ID SIZE PATH 00:10:59.182 1 510.00MiB /dev/nvme0n1p1 00:10:59.182 00:10:59.182 14:25:05 -- common/autotest_common.sh@931 -- # return 0 00:10:59.182 14:25:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:59.182 14:25:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:59.182 14:25:05 -- target/filesystem.sh@25 -- # sync 00:10:59.182 14:25:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:59.182 14:25:05 -- target/filesystem.sh@27 -- # sync 00:10:59.182 14:25:05 -- target/filesystem.sh@29 -- # i=0 00:10:59.182 14:25:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:59.182 14:25:05 -- target/filesystem.sh@37 -- # kill -0 61342 00:10:59.182 14:25:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:59.182 14:25:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:59.182 14:25:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:59.182 14:25:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:59.182 ************************************ 00:10:59.182 END TEST filesystem_in_capsule_btrfs 00:10:59.182 ************************************ 00:10:59.182 00:10:59.182 real 0m0.236s 00:10:59.182 user 0m0.021s 00:10:59.182 sys 0m0.070s 00:10:59.182 14:25:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:59.182 14:25:05 -- common/autotest_common.sh@10 -- # set +x 00:10:59.182 14:25:05 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:59.182 14:25:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:59.182 14:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.182 14:25:05 -- common/autotest_common.sh@10 -- # set +x 00:10:59.182 ************************************ 00:10:59.182 START TEST filesystem_in_capsule_xfs 00:10:59.182 ************************************ 00:10:59.182 14:25:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:10:59.182 14:25:05 -- target/filesystem.sh@18 -- # fstype=xfs 00:10:59.182 14:25:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:59.182 14:25:05 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:59.182 14:25:05 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:10:59.182 14:25:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:59.182 14:25:05 -- common/autotest_common.sh@914 -- # local i=0 00:10:59.182 14:25:05 -- common/autotest_common.sh@915 -- # local force 00:10:59.182 14:25:05 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:10:59.182 14:25:05 -- common/autotest_common.sh@920 -- # force=-f 00:10:59.182 14:25:05 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:59.182 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:59.182 = sectsz=512 attr=2, projid32bit=1 00:10:59.182 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:59.182 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:59.182 data = bsize=4096 blocks=130560, imaxpct=25 00:10:59.182 = sunit=0 swidth=0 blks 00:10:59.182 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:59.182 log =internal log bsize=4096 blocks=16384, version=2 00:10:59.182 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:59.182 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:59.182 Discarding blocks...Done. 00:10:59.182 14:25:06 -- common/autotest_common.sh@931 -- # return 0 00:10:59.182 14:25:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.088 14:25:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.088 14:25:07 -- target/filesystem.sh@25 -- # sync 00:11:01.088 14:25:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.088 14:25:07 -- target/filesystem.sh@27 -- # sync 00:11:01.088 14:25:07 -- target/filesystem.sh@29 -- # i=0 00:11:01.088 14:25:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.088 14:25:07 -- target/filesystem.sh@37 -- # kill -0 61342 00:11:01.088 14:25:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.088 14:25:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.088 14:25:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.088 14:25:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.088 ************************************ 00:11:01.088 END TEST filesystem_in_capsule_xfs 00:11:01.088 ************************************ 00:11:01.088 00:11:01.088 real 0m2.664s 00:11:01.088 user 0m0.028s 00:11:01.088 sys 0m0.059s 00:11:01.088 14:25:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:01.088 14:25:07 -- common/autotest_common.sh@10 -- # set +x 00:11:01.088 14:25:07 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:01.088 14:25:08 -- target/filesystem.sh@93 -- # sync 00:11:01.088 14:25:08 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.347 14:25:08 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.347 14:25:08 -- common/autotest_common.sh@1208 -- # local i=0 00:11:01.347 14:25:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:01.347 14:25:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.347 14:25:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:01.347 14:25:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.347 14:25:08 -- common/autotest_common.sh@1220 -- # return 0 00:11:01.347 14:25:08 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.347 14:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.347 14:25:08 -- common/autotest_common.sh@10 -- # set +x 00:11:01.347 14:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.347 14:25:08 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:01.347 14:25:08 -- target/filesystem.sh@101 -- # killprocess 61342 00:11:01.347 14:25:08 -- common/autotest_common.sh@936 -- # '[' -z 61342 ']' 00:11:01.347 14:25:08 -- common/autotest_common.sh@940 -- # kill -0 61342 00:11:01.347 14:25:08 -- 
common/autotest_common.sh@941 -- # uname 00:11:01.347 14:25:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.347 14:25:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61342 00:11:01.347 14:25:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:01.347 killing process with pid 61342 00:11:01.347 14:25:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:01.347 14:25:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61342' 00:11:01.347 14:25:08 -- common/autotest_common.sh@955 -- # kill 61342 00:11:01.347 14:25:08 -- common/autotest_common.sh@960 -- # wait 61342 00:11:02.283 ************************************ 00:11:02.283 END TEST nvmf_filesystem_in_capsule 00:11:02.283 ************************************ 00:11:02.283 14:25:08 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:02.283 00:11:02.283 real 0m14.603s 00:11:02.283 user 0m55.626s 00:11:02.283 sys 0m1.850s 00:11:02.283 14:25:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:02.283 14:25:08 -- common/autotest_common.sh@10 -- # set +x 00:11:02.283 14:25:08 -- target/filesystem.sh@108 -- # nvmftestfini 00:11:02.283 14:25:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:02.283 14:25:08 -- nvmf/common.sh@116 -- # sync 00:11:02.283 14:25:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:02.283 14:25:09 -- nvmf/common.sh@119 -- # set +e 00:11:02.283 14:25:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:02.283 14:25:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:02.283 rmmod nvme_tcp 00:11:02.283 rmmod nvme_fabrics 00:11:02.283 rmmod nvme_keyring 00:11:02.283 14:25:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:02.283 14:25:09 -- nvmf/common.sh@123 -- # set -e 00:11:02.283 14:25:09 -- nvmf/common.sh@124 -- # return 0 00:11:02.283 14:25:09 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:11:02.283 14:25:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:02.283 14:25:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:02.283 14:25:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:02.283 14:25:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.283 14:25:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:02.283 14:25:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.283 14:25:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.284 14:25:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.284 14:25:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:02.284 ************************************ 00:11:02.284 END TEST nvmf_filesystem 00:11:02.284 ************************************ 00:11:02.284 00:11:02.284 real 0m31.114s 00:11:02.284 user 1m54.688s 00:11:02.284 sys 0m4.288s 00:11:02.284 14:25:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:02.284 14:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:02.284 14:25:09 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:02.284 14:25:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:02.284 14:25:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.284 14:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:02.284 ************************************ 00:11:02.284 START TEST nvmf_discovery 00:11:02.284 ************************************ 00:11:02.284 14:25:09 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:02.284 * Looking for test storage... 00:11:02.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.284 14:25:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:02.284 14:25:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:02.284 14:25:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:02.542 14:25:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:02.542 14:25:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:02.542 14:25:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:02.542 14:25:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:02.542 14:25:09 -- scripts/common.sh@335 -- # IFS=.-: 00:11:02.542 14:25:09 -- scripts/common.sh@335 -- # read -ra ver1 00:11:02.542 14:25:09 -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.542 14:25:09 -- scripts/common.sh@336 -- # read -ra ver2 00:11:02.542 14:25:09 -- scripts/common.sh@337 -- # local 'op=<' 00:11:02.542 14:25:09 -- scripts/common.sh@339 -- # ver1_l=2 00:11:02.542 14:25:09 -- scripts/common.sh@340 -- # ver2_l=1 00:11:02.542 14:25:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:02.542 14:25:09 -- scripts/common.sh@343 -- # case "$op" in 00:11:02.542 14:25:09 -- scripts/common.sh@344 -- # : 1 00:11:02.542 14:25:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:02.542 14:25:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.542 14:25:09 -- scripts/common.sh@364 -- # decimal 1 00:11:02.543 14:25:09 -- scripts/common.sh@352 -- # local d=1 00:11:02.543 14:25:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.543 14:25:09 -- scripts/common.sh@354 -- # echo 1 00:11:02.543 14:25:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:02.543 14:25:09 -- scripts/common.sh@365 -- # decimal 2 00:11:02.543 14:25:09 -- scripts/common.sh@352 -- # local d=2 00:11:02.543 14:25:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.543 14:25:09 -- scripts/common.sh@354 -- # echo 2 00:11:02.543 14:25:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:02.543 14:25:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:02.543 14:25:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:02.543 14:25:09 -- scripts/common.sh@367 -- # return 0 00:11:02.543 14:25:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.543 14:25:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:02.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.543 --rc genhtml_branch_coverage=1 00:11:02.543 --rc genhtml_function_coverage=1 00:11:02.543 --rc genhtml_legend=1 00:11:02.543 --rc geninfo_all_blocks=1 00:11:02.543 --rc geninfo_unexecuted_blocks=1 00:11:02.543 00:11:02.543 ' 00:11:02.543 14:25:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:02.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.543 --rc genhtml_branch_coverage=1 00:11:02.543 --rc genhtml_function_coverage=1 00:11:02.543 --rc genhtml_legend=1 00:11:02.543 --rc geninfo_all_blocks=1 00:11:02.543 --rc geninfo_unexecuted_blocks=1 00:11:02.543 00:11:02.543 ' 00:11:02.543 14:25:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:02.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.543 --rc genhtml_branch_coverage=1 00:11:02.543 --rc genhtml_function_coverage=1 00:11:02.543 --rc genhtml_legend=1 00:11:02.543 
--rc geninfo_all_blocks=1 00:11:02.543 --rc geninfo_unexecuted_blocks=1 00:11:02.543 00:11:02.543 ' 00:11:02.543 14:25:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:02.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.543 --rc genhtml_branch_coverage=1 00:11:02.543 --rc genhtml_function_coverage=1 00:11:02.543 --rc genhtml_legend=1 00:11:02.543 --rc geninfo_all_blocks=1 00:11:02.543 --rc geninfo_unexecuted_blocks=1 00:11:02.543 00:11:02.543 ' 00:11:02.543 14:25:09 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.543 14:25:09 -- nvmf/common.sh@7 -- # uname -s 00:11:02.543 14:25:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.543 14:25:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.543 14:25:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.543 14:25:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.543 14:25:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.543 14:25:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.543 14:25:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.543 14:25:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.543 14:25:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.543 14:25:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.543 14:25:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:11:02.543 14:25:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:11:02.543 14:25:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.543 14:25:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.543 14:25:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.543 14:25:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.543 14:25:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.543 14:25:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.543 14:25:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.543 14:25:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.543 14:25:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.543 14:25:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.543 14:25:09 -- paths/export.sh@5 -- # export PATH 00:11:02.543 14:25:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.543 14:25:09 -- nvmf/common.sh@46 -- # : 0 00:11:02.543 14:25:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:02.543 14:25:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:02.543 14:25:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:02.543 14:25:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.543 14:25:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.543 14:25:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:02.543 14:25:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:02.543 14:25:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:02.543 14:25:09 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:02.543 14:25:09 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:02.543 14:25:09 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:02.543 14:25:09 -- target/discovery.sh@15 -- # hash nvme 00:11:02.543 14:25:09 -- target/discovery.sh@20 -- # nvmftestinit 00:11:02.543 14:25:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:02.543 14:25:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.543 14:25:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:02.543 14:25:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:02.543 14:25:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:02.543 14:25:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.543 14:25:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.543 14:25:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.543 14:25:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:02.543 14:25:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:02.543 14:25:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:02.543 14:25:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:02.543 14:25:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:02.543 14:25:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:02.543 14:25:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.543 14:25:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.543 14:25:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:02.543 14:25:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:02.543 14:25:09 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.543 14:25:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.543 14:25:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.543 14:25:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.543 14:25:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.543 14:25:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.543 14:25:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.543 14:25:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.543 14:25:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:02.543 14:25:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:02.543 Cannot find device "nvmf_tgt_br" 00:11:02.543 14:25:09 -- nvmf/common.sh@154 -- # true 00:11:02.543 14:25:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.543 Cannot find device "nvmf_tgt_br2" 00:11:02.543 14:25:09 -- nvmf/common.sh@155 -- # true 00:11:02.543 14:25:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:02.543 14:25:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:02.543 Cannot find device "nvmf_tgt_br" 00:11:02.543 14:25:09 -- nvmf/common.sh@157 -- # true 00:11:02.543 14:25:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:02.543 Cannot find device "nvmf_tgt_br2" 00:11:02.543 14:25:09 -- nvmf/common.sh@158 -- # true 00:11:02.543 14:25:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:02.543 14:25:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:02.543 14:25:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.543 14:25:09 -- nvmf/common.sh@161 -- # true 00:11:02.543 14:25:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.543 14:25:09 -- nvmf/common.sh@162 -- # true 00:11:02.543 14:25:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.543 14:25:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.811 14:25:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.811 14:25:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.811 14:25:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.811 14:25:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.811 14:25:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.811 14:25:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:02.811 14:25:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:02.811 14:25:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:02.811 14:25:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:02.811 14:25:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:02.811 14:25:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:02.811 14:25:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:02.811 14:25:09 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.811 14:25:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.811 14:25:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:02.811 14:25:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:02.811 14:25:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:02.811 14:25:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:02.811 14:25:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:02.811 14:25:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:02.811 14:25:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:02.811 14:25:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:02.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:11:02.811 00:11:02.811 --- 10.0.0.2 ping statistics --- 00:11:02.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.811 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:02.811 14:25:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:02.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:02.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:11:02.811 00:11:02.811 --- 10.0.0.3 ping statistics --- 00:11:02.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.811 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:02.811 14:25:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:02.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:02.811 00:11:02.811 --- 10.0.0.1 ping statistics --- 00:11:02.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.811 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:02.811 14:25:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.811 14:25:09 -- nvmf/common.sh@421 -- # return 0 00:11:02.811 14:25:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:02.811 14:25:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.812 14:25:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:02.812 14:25:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:02.812 14:25:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.812 14:25:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:02.812 14:25:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:02.812 14:25:09 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:02.812 14:25:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:02.812 14:25:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:02.812 14:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:02.812 14:25:09 -- nvmf/common.sh@469 -- # nvmfpid=61894 00:11:02.812 14:25:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.812 14:25:09 -- nvmf/common.sh@470 -- # waitforlisten 61894 00:11:02.812 14:25:09 -- common/autotest_common.sh@829 -- # '[' -z 61894 ']' 00:11:02.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
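For readers following the nvmf_veth_init trace above: before the target is started, the test builds a small two-sided topology on a single host. A condensed sketch of those steps is below; namespace, interface, and address names are the ones printed in the trace, and the link-up commands are folded into a loop for brevity.

# The target side gets its own network namespace so initiator and target stacks stay isolated on one host.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 (inside the namespace).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in on the initiator interface and forwarding across the bridge;
# the ping checks above then confirm 10.0.0.1 <-> 10.0.0.2 / 10.0.0.3 connectivity.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT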
00:11:02.812 14:25:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.812 14:25:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.812 14:25:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.812 14:25:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.812 14:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:02.812 [2024-12-06 14:25:09.775393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:02.812 [2024-12-06 14:25:09.775551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.069 [2024-12-06 14:25:09.916590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.328 [2024-12-06 14:25:10.055106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:03.328 [2024-12-06 14:25:10.055300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.328 [2024-12-06 14:25:10.055317] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.328 [2024-12-06 14:25:10.055329] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.328 [2024-12-06 14:25:10.055496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.328 [2024-12-06 14:25:10.056323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.328 [2024-12-06 14:25:10.056499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.328 [2024-12-06 14:25:10.056507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.895 14:25:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.895 14:25:10 -- common/autotest_common.sh@862 -- # return 0 00:11:03.895 14:25:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:03.895 14:25:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:03.895 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:03.895 14:25:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.895 14:25:10 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.895 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.895 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:03.895 [2024-12-06 14:25:10.830874] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.154 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.154 14:25:10 -- target/discovery.sh@26 -- # seq 1 4 00:11:04.154 14:25:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:04.154 14:25:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:04.154 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.154 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.154 Null1 00:11:04.154 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.154 14:25:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:04.154 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.154 14:25:10 -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.154 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.154 14:25:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:04.154 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.154 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.154 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.154 14:25:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.154 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.154 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.154 [2024-12-06 14:25:10.904040] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.154 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.154 14:25:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:04.154 14:25:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 Null2 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:04.155 14:25:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 Null3 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:04.155 14:25:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 Null4 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:04.155 14:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:11 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:04.155 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:11 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:04.155 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.155 14:25:11 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 4420 00:11:04.414 00:11:04.414 Discovery Log Number of Records 6, Generation counter 6 00:11:04.414 =====Discovery Log Entry 0====== 00:11:04.414 trtype: tcp 00:11:04.414 adrfam: ipv4 00:11:04.414 subtype: current discovery subsystem 00:11:04.414 treq: not required 00:11:04.414 portid: 0 00:11:04.414 trsvcid: 4420 00:11:04.414 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:04.414 traddr: 10.0.0.2 00:11:04.414 eflags: explicit discovery connections, duplicate discovery information 00:11:04.414 sectype: none 00:11:04.414 =====Discovery Log Entry 1====== 00:11:04.414 trtype: tcp 00:11:04.414 adrfam: ipv4 00:11:04.414 subtype: nvme subsystem 00:11:04.414 treq: not required 00:11:04.414 portid: 0 00:11:04.414 trsvcid: 4420 00:11:04.414 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:04.414 traddr: 10.0.0.2 00:11:04.414 eflags: none 00:11:04.414 sectype: none 00:11:04.414 =====Discovery Log Entry 2====== 00:11:04.414 trtype: tcp 00:11:04.414 adrfam: ipv4 
00:11:04.414 subtype: nvme subsystem 00:11:04.414 treq: not required 00:11:04.414 portid: 0 00:11:04.414 trsvcid: 4420 00:11:04.414 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:04.414 traddr: 10.0.0.2 00:11:04.414 eflags: none 00:11:04.414 sectype: none 00:11:04.414 =====Discovery Log Entry 3====== 00:11:04.414 trtype: tcp 00:11:04.414 adrfam: ipv4 00:11:04.414 subtype: nvme subsystem 00:11:04.414 treq: not required 00:11:04.414 portid: 0 00:11:04.414 trsvcid: 4420 00:11:04.414 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:04.414 traddr: 10.0.0.2 00:11:04.414 eflags: none 00:11:04.414 sectype: none 00:11:04.414 =====Discovery Log Entry 4====== 00:11:04.414 trtype: tcp 00:11:04.414 adrfam: ipv4 00:11:04.414 subtype: nvme subsystem 00:11:04.414 treq: not required 00:11:04.414 portid: 0 00:11:04.414 trsvcid: 4420 00:11:04.414 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:04.414 traddr: 10.0.0.2 00:11:04.414 eflags: none 00:11:04.414 sectype: none 00:11:04.414 =====Discovery Log Entry 5====== 00:11:04.414 trtype: tcp 00:11:04.414 adrfam: ipv4 00:11:04.414 subtype: discovery subsystem referral 00:11:04.414 treq: not required 00:11:04.414 portid: 0 00:11:04.414 trsvcid: 4430 00:11:04.414 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:04.414 traddr: 10.0.0.2 00:11:04.414 eflags: none 00:11:04.414 sectype: none 00:11:04.414 Perform nvmf subsystem discovery via RPC 00:11:04.414 14:25:11 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:04.414 14:25:11 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:04.414 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.414 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.414 [2024-12-06 14:25:11.136059] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:11:04.414 [ 00:11:04.414 { 00:11:04.414 "allow_any_host": true, 00:11:04.414 "hosts": [], 00:11:04.414 "listen_addresses": [ 00:11:04.414 { 00:11:04.414 "adrfam": "IPv4", 00:11:04.414 "traddr": "10.0.0.2", 00:11:04.414 "transport": "TCP", 00:11:04.414 "trsvcid": "4420", 00:11:04.414 "trtype": "TCP" 00:11:04.414 } 00:11:04.414 ], 00:11:04.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:04.414 "subtype": "Discovery" 00:11:04.414 }, 00:11:04.415 { 00:11:04.415 "allow_any_host": true, 00:11:04.415 "hosts": [], 00:11:04.415 "listen_addresses": [ 00:11:04.415 { 00:11:04.415 "adrfam": "IPv4", 00:11:04.415 "traddr": "10.0.0.2", 00:11:04.415 "transport": "TCP", 00:11:04.415 "trsvcid": "4420", 00:11:04.415 "trtype": "TCP" 00:11:04.415 } 00:11:04.415 ], 00:11:04.415 "max_cntlid": 65519, 00:11:04.415 "max_namespaces": 32, 00:11:04.415 "min_cntlid": 1, 00:11:04.415 "model_number": "SPDK bdev Controller", 00:11:04.415 "namespaces": [ 00:11:04.415 { 00:11:04.415 "bdev_name": "Null1", 00:11:04.415 "name": "Null1", 00:11:04.415 "nguid": "160A871EFC3B410192668D2E19658591", 00:11:04.415 "nsid": 1, 00:11:04.415 "uuid": "160a871e-fc3b-4101-9266-8d2e19658591" 00:11:04.415 } 00:11:04.415 ], 00:11:04.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.415 "serial_number": "SPDK00000000000001", 00:11:04.415 "subtype": "NVMe" 00:11:04.415 }, 00:11:04.415 { 00:11:04.415 "allow_any_host": true, 00:11:04.415 "hosts": [], 00:11:04.415 "listen_addresses": [ 00:11:04.415 { 00:11:04.415 "adrfam": "IPv4", 00:11:04.415 "traddr": "10.0.0.2", 00:11:04.415 "transport": "TCP", 00:11:04.415 "trsvcid": "4420", 00:11:04.415 "trtype": "TCP" 00:11:04.415 
} 00:11:04.415 ], 00:11:04.415 "max_cntlid": 65519, 00:11:04.415 "max_namespaces": 32, 00:11:04.415 "min_cntlid": 1, 00:11:04.415 "model_number": "SPDK bdev Controller", 00:11:04.415 "namespaces": [ 00:11:04.415 { 00:11:04.415 "bdev_name": "Null2", 00:11:04.415 "name": "Null2", 00:11:04.415 "nguid": "D64B8417ED9A4326B8A03FD4B7F0FFC9", 00:11:04.415 "nsid": 1, 00:11:04.415 "uuid": "d64b8417-ed9a-4326-b8a0-3fd4b7f0ffc9" 00:11:04.415 } 00:11:04.415 ], 00:11:04.415 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:04.415 "serial_number": "SPDK00000000000002", 00:11:04.415 "subtype": "NVMe" 00:11:04.415 }, 00:11:04.415 { 00:11:04.415 "allow_any_host": true, 00:11:04.415 "hosts": [], 00:11:04.415 "listen_addresses": [ 00:11:04.415 { 00:11:04.415 "adrfam": "IPv4", 00:11:04.415 "traddr": "10.0.0.2", 00:11:04.415 "transport": "TCP", 00:11:04.415 "trsvcid": "4420", 00:11:04.415 "trtype": "TCP" 00:11:04.415 } 00:11:04.415 ], 00:11:04.415 "max_cntlid": 65519, 00:11:04.415 "max_namespaces": 32, 00:11:04.415 "min_cntlid": 1, 00:11:04.415 "model_number": "SPDK bdev Controller", 00:11:04.415 "namespaces": [ 00:11:04.415 { 00:11:04.415 "bdev_name": "Null3", 00:11:04.415 "name": "Null3", 00:11:04.415 "nguid": "10E021360F6940398166DFC472034B31", 00:11:04.415 "nsid": 1, 00:11:04.415 "uuid": "10e02136-0f69-4039-8166-dfc472034b31" 00:11:04.415 } 00:11:04.415 ], 00:11:04.415 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:04.415 "serial_number": "SPDK00000000000003", 00:11:04.415 "subtype": "NVMe" 00:11:04.415 }, 00:11:04.415 { 00:11:04.415 "allow_any_host": true, 00:11:04.415 "hosts": [], 00:11:04.415 "listen_addresses": [ 00:11:04.415 { 00:11:04.415 "adrfam": "IPv4", 00:11:04.415 "traddr": "10.0.0.2", 00:11:04.415 "transport": "TCP", 00:11:04.415 "trsvcid": "4420", 00:11:04.415 "trtype": "TCP" 00:11:04.415 } 00:11:04.415 ], 00:11:04.415 "max_cntlid": 65519, 00:11:04.415 "max_namespaces": 32, 00:11:04.415 "min_cntlid": 1, 00:11:04.415 "model_number": "SPDK bdev Controller", 00:11:04.415 "namespaces": [ 00:11:04.415 { 00:11:04.415 "bdev_name": "Null4", 00:11:04.415 "name": "Null4", 00:11:04.415 "nguid": "1747AE7115554A0D86729938C190BDD3", 00:11:04.415 "nsid": 1, 00:11:04.415 "uuid": "1747ae71-1555-4a0d-8672-9938c190bdd3" 00:11:04.415 } 00:11:04.415 ], 00:11:04.415 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:04.415 "serial_number": "SPDK00000000000004", 00:11:04.415 "subtype": "NVMe" 00:11:04.415 } 00:11:04.415 ] 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@42 -- # seq 1 4 00:11:04.415 14:25:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:04.415 14:25:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:04.415 14:25:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:04.415 14:25:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:04.415 14:25:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:04.415 14:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.415 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 14:25:11 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:04.415 14:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.415 14:25:11 -- target/discovery.sh@49 -- # check_bdevs= 00:11:04.415 14:25:11 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:04.415 14:25:11 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:04.415 14:25:11 -- target/discovery.sh@57 -- # nvmftestfini 00:11:04.415 14:25:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:04.415 14:25:11 -- nvmf/common.sh@116 -- # sync 00:11:04.415 14:25:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:04.415 14:25:11 -- nvmf/common.sh@119 -- # set +e 00:11:04.415 14:25:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:04.415 14:25:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:04.415 rmmod nvme_tcp 00:11:04.415 rmmod nvme_fabrics 00:11:04.415 rmmod nvme_keyring 00:11:04.673 14:25:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:04.673 14:25:11 -- nvmf/common.sh@123 -- # set -e 00:11:04.673 14:25:11 -- nvmf/common.sh@124 -- # return 0 00:11:04.674 14:25:11 -- nvmf/common.sh@477 -- # '[' -n 61894 ']' 00:11:04.674 14:25:11 -- nvmf/common.sh@478 -- # 
killprocess 61894 00:11:04.674 14:25:11 -- common/autotest_common.sh@936 -- # '[' -z 61894 ']' 00:11:04.674 14:25:11 -- common/autotest_common.sh@940 -- # kill -0 61894 00:11:04.674 14:25:11 -- common/autotest_common.sh@941 -- # uname 00:11:04.674 14:25:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.674 14:25:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61894 00:11:04.674 killing process with pid 61894 00:11:04.674 14:25:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:04.674 14:25:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:04.674 14:25:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61894' 00:11:04.674 14:25:11 -- common/autotest_common.sh@955 -- # kill 61894 00:11:04.674 [2024-12-06 14:25:11.439332] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:11:04.674 14:25:11 -- common/autotest_common.sh@960 -- # wait 61894 00:11:04.932 14:25:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:04.932 14:25:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:04.932 14:25:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:04.932 14:25:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.932 14:25:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:04.932 14:25:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.932 14:25:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.932 14:25:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.932 14:25:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:04.932 00:11:04.932 real 0m2.617s 00:11:04.932 user 0m6.913s 00:11:04.932 sys 0m0.626s 00:11:04.932 14:25:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:04.932 ************************************ 00:11:04.932 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 END TEST nvmf_discovery 00:11:04.932 ************************************ 00:11:04.932 14:25:11 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:04.932 14:25:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:04.932 14:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.932 14:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 ************************************ 00:11:04.932 START TEST nvmf_referrals 00:11:04.932 ************************************ 00:11:04.932 14:25:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:04.932 * Looking for test storage... 
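The nvmf_discovery test that just finished above drives the target entirely over RPC; a minimal stand-alone version of the same flow is sketched here. It assumes an nvmf_tgt already listening on /var/tmp/spdk.sock and uses scripts/rpc.py from the checked-out tree in place of the rpc_cmd wrapper seen in the trace; the RPC names and their arguments are the ones logged above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path inside the checked-out SPDK repo

# TCP transport, then one null bdev + subsystem + listener per index.
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512      # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# Discovery listener plus one referral pointing at port 4430.
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# Initiator view: six discovery-log records are expected, as in the log above
# (1 current discovery subsystem, 4 NVMe subsystems, 1 referral).
nvme discover -t tcp -a 10.0.0.2 -s 4420

# Target view and teardown.
$rpc nvmf_get_subsystems
for i in 1 2 3 4; do
    $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    $rpc bdev_null_delete "Null$i"
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

The real discovery.sh additionally verifies, via bdev_get_bdevs, that no bdevs remain after teardown before the target process is killed.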
00:11:05.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:05.191 14:25:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:05.191 14:25:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:05.191 14:25:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.191 14:25:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.191 14:25:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.191 14:25:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.191 14:25:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.191 14:25:12 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.191 14:25:12 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.191 14:25:12 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.191 14:25:12 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.191 14:25:12 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.191 14:25:12 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.191 14:25:12 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.191 14:25:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.191 14:25:12 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.191 14:25:12 -- scripts/common.sh@344 -- # : 1 00:11:05.191 14:25:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.191 14:25:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.191 14:25:12 -- scripts/common.sh@364 -- # decimal 1 00:11:05.191 14:25:12 -- scripts/common.sh@352 -- # local d=1 00:11:05.191 14:25:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.191 14:25:12 -- scripts/common.sh@354 -- # echo 1 00:11:05.191 14:25:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.191 14:25:12 -- scripts/common.sh@365 -- # decimal 2 00:11:05.191 14:25:12 -- scripts/common.sh@352 -- # local d=2 00:11:05.191 14:25:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.191 14:25:12 -- scripts/common.sh@354 -- # echo 2 00:11:05.191 14:25:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.191 14:25:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.191 14:25:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.191 14:25:12 -- scripts/common.sh@367 -- # return 0 00:11:05.191 14:25:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.191 14:25:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.191 --rc genhtml_branch_coverage=1 00:11:05.191 --rc genhtml_function_coverage=1 00:11:05.191 --rc genhtml_legend=1 00:11:05.191 --rc geninfo_all_blocks=1 00:11:05.191 --rc geninfo_unexecuted_blocks=1 00:11:05.191 00:11:05.191 ' 00:11:05.191 14:25:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.191 --rc genhtml_branch_coverage=1 00:11:05.191 --rc genhtml_function_coverage=1 00:11:05.191 --rc genhtml_legend=1 00:11:05.191 --rc geninfo_all_blocks=1 00:11:05.191 --rc geninfo_unexecuted_blocks=1 00:11:05.191 00:11:05.191 ' 00:11:05.191 14:25:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.191 --rc genhtml_branch_coverage=1 00:11:05.191 --rc genhtml_function_coverage=1 00:11:05.191 --rc genhtml_legend=1 00:11:05.191 --rc geninfo_all_blocks=1 00:11:05.191 --rc geninfo_unexecuted_blocks=1 00:11:05.191 00:11:05.191 ' 00:11:05.191 
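The lt 1.15 2 / cmp_versions trace at the top of each test is the scripts/common.sh helper checking whether the installed lcov predates the 2.x option names, which appears to be why the older --rc lcov_branch_coverage=1 style ends up in LCOV_OPTS here. A stripped-down sketch of that comparison (same split-on-.-: logic, reduced to the operators used above):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Compare component by component; missing components count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                  # all components equal
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi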
14:25:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.191 --rc genhtml_branch_coverage=1 00:11:05.191 --rc genhtml_function_coverage=1 00:11:05.191 --rc genhtml_legend=1 00:11:05.191 --rc geninfo_all_blocks=1 00:11:05.191 --rc geninfo_unexecuted_blocks=1 00:11:05.191 00:11:05.191 ' 00:11:05.191 14:25:12 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:05.191 14:25:12 -- nvmf/common.sh@7 -- # uname -s 00:11:05.191 14:25:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.191 14:25:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.191 14:25:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.191 14:25:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.191 14:25:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.191 14:25:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.191 14:25:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.191 14:25:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.191 14:25:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.191 14:25:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.191 14:25:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:11:05.191 14:25:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:11:05.191 14:25:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.191 14:25:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.191 14:25:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:05.191 14:25:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.191 14:25:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.191 14:25:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.191 14:25:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.192 14:25:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.192 14:25:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.192 14:25:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.192 14:25:12 -- paths/export.sh@5 -- # export PATH 00:11:05.192 14:25:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.192 14:25:12 -- nvmf/common.sh@46 -- # : 0 00:11:05.192 14:25:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:05.192 14:25:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:05.192 14:25:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:05.192 14:25:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.192 14:25:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.192 14:25:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:05.192 14:25:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:05.192 14:25:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:05.192 14:25:12 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:05.192 14:25:12 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:05.192 14:25:12 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:05.192 14:25:12 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:05.192 14:25:12 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:05.192 14:25:12 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:05.192 14:25:12 -- target/referrals.sh@37 -- # nvmftestinit 00:11:05.192 14:25:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:05.192 14:25:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.192 14:25:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:05.192 14:25:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:05.192 14:25:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:05.192 14:25:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.192 14:25:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.192 14:25:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.192 14:25:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:05.192 14:25:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:05.192 14:25:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:05.192 14:25:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:05.192 14:25:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:05.192 14:25:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:05.192 14:25:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.192 14:25:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:11:05.192 14:25:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:05.192 14:25:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:05.192 14:25:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:05.192 14:25:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:05.192 14:25:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:05.192 14:25:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.192 14:25:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:05.192 14:25:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:05.192 14:25:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:05.192 14:25:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:05.192 14:25:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:05.192 14:25:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:05.192 Cannot find device "nvmf_tgt_br" 00:11:05.192 14:25:12 -- nvmf/common.sh@154 -- # true 00:11:05.192 14:25:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.192 Cannot find device "nvmf_tgt_br2" 00:11:05.192 14:25:12 -- nvmf/common.sh@155 -- # true 00:11:05.192 14:25:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:05.192 14:25:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:05.192 Cannot find device "nvmf_tgt_br" 00:11:05.192 14:25:12 -- nvmf/common.sh@157 -- # true 00:11:05.192 14:25:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:05.192 Cannot find device "nvmf_tgt_br2" 00:11:05.192 14:25:12 -- nvmf/common.sh@158 -- # true 00:11:05.192 14:25:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:05.451 14:25:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:05.451 14:25:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.451 14:25:12 -- nvmf/common.sh@161 -- # true 00:11:05.451 14:25:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.451 14:25:12 -- nvmf/common.sh@162 -- # true 00:11:05.451 14:25:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:05.451 14:25:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:05.451 14:25:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:05.451 14:25:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:05.451 14:25:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:05.451 14:25:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:05.451 14:25:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:05.451 14:25:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:05.451 14:25:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:05.451 14:25:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:05.451 14:25:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:05.451 14:25:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:11:05.451 14:25:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:05.451 14:25:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:05.451 14:25:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:05.451 14:25:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:05.451 14:25:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:05.451 14:25:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:05.451 14:25:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:05.451 14:25:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:05.451 14:25:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:05.709 14:25:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:05.709 14:25:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:05.709 14:25:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:05.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:11:05.709 00:11:05.709 --- 10.0.0.2 ping statistics --- 00:11:05.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.709 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:05.709 14:25:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:05.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:05.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:05.710 00:11:05.710 --- 10.0.0.3 ping statistics --- 00:11:05.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.710 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:05.710 14:25:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:05.710 00:11:05.710 --- 10.0.0.1 ping statistics --- 00:11:05.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.710 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:05.710 14:25:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.710 14:25:12 -- nvmf/common.sh@421 -- # return 0 00:11:05.710 14:25:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:05.710 14:25:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.710 14:25:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:05.710 14:25:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:05.710 14:25:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.710 14:25:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:05.710 14:25:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:05.710 14:25:12 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:05.710 14:25:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:05.710 14:25:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.710 14:25:12 -- common/autotest_common.sh@10 -- # set +x 00:11:05.710 14:25:12 -- nvmf/common.sh@469 -- # nvmfpid=62134 00:11:05.710 14:25:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.710 14:25:12 -- nvmf/common.sh@470 -- # waitforlisten 62134 00:11:05.710 14:25:12 -- common/autotest_common.sh@829 -- # '[' -z 62134 ']' 00:11:05.710 14:25:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.710 14:25:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.710 14:25:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.710 14:25:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.710 14:25:12 -- common/autotest_common.sh@10 -- # set +x 00:11:05.710 [2024-12-06 14:25:12.539101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:05.710 [2024-12-06 14:25:12.539678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.064 [2024-12-06 14:25:12.684912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.064 [2024-12-06 14:25:12.823654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.064 [2024-12-06 14:25:12.823866] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.064 [2024-12-06 14:25:12.823884] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.064 [2024-12-06 14:25:12.823895] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
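As in the discovery run, nvmfappstart launches the target inside the test namespace (pid 62134 here) and waitforlisten blocks until the RPC socket answers. A simplified sketch of that start-up sequence follows; the rpc_get_methods polling loop is an assumption standing in for the real waitforlisten helper, while the binary path and flags are taken from the command line logged above.

tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
sock=/var/tmp/spdk.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path inside the repo

# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask (hence the spdk_trace hint above),
# -m 0xF: core mask for cores 0-3, matching the four reactors reported next.
ip netns exec nvmf_tgt_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll until the target responds on its UNIX-domain RPC socket (simplified waitforlisten).
for ((i = 0; i < 100; i++)); do
    "$rpc" -s "$sock" rpc_get_methods &> /dev/null && break
    sleep 0.1
done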
00:11:06.064 [2024-12-06 14:25:12.824201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.064 [2024-12-06 14:25:12.824542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.064 [2024-12-06 14:25:12.824614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.064 [2024-12-06 14:25:12.824620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.630 14:25:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.630 14:25:13 -- common/autotest_common.sh@862 -- # return 0 00:11:06.630 14:25:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:06.630 14:25:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.630 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 14:25:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.887 14:25:13 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.887 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.887 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 [2024-12-06 14:25:13.630168] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.887 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.887 14:25:13 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:06.887 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.887 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 [2024-12-06 14:25:13.657472] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:06.887 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.887 14:25:13 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:06.887 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.887 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.887 14:25:13 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:06.887 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.887 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.887 14:25:13 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:06.887 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.887 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.887 14:25:13 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.887 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.887 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 14:25:13 -- target/referrals.sh@48 -- # jq length 00:11:06.887 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.887 14:25:13 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:06.888 14:25:13 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:06.888 14:25:13 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:06.888 14:25:13 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.888 14:25:13 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
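The referral verification running at this point reads the entries back in two ways and compares them: once over RPC on the target side and once from the discovery log as an initiator. A condensed sketch of that check, with the same addresses and jq filters as the trace (rpc.py again standing in for the rpc_cmd wrapper, its path assumed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path inside the repo

# Discovery service on port 8009 plus three referrals on port 4430.
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side view: the referral list should have length 3 and the expected traddrs.
$rpc nvmf_discovery_get_referrals | jq length
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Initiator-side view: the same addresses appear as discovery-log records that are
# not the current discovery subsystem.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

Later in the trace the referrals are removed again with nvmf_discovery_remove_referral, after which both views are expected to come back empty before subsystem-scoped referrals are exercised.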
00:11:06.888 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.888 14:25:13 -- target/referrals.sh@21 -- # sort 00:11:06.888 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.888 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.888 14:25:13 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:06.888 14:25:13 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:06.888 14:25:13 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:06.888 14:25:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.888 14:25:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.888 14:25:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.888 14:25:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.888 14:25:13 -- target/referrals.sh@26 -- # sort 00:11:07.146 14:25:13 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:07.146 14:25:13 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:07.146 14:25:13 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:07.146 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.146 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.146 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.146 14:25:13 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:07.146 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.146 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.146 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.146 14:25:13 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:07.146 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.146 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.146 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.146 14:25:13 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.146 14:25:13 -- target/referrals.sh@56 -- # jq length 00:11:07.146 14:25:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.146 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.146 14:25:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.146 14:25:14 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:07.146 14:25:14 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:07.146 14:25:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.146 14:25:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.146 14:25:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.146 14:25:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.146 14:25:14 -- target/referrals.sh@26 -- # sort 00:11:07.404 14:25:14 -- target/referrals.sh@26 -- # echo 00:11:07.404 14:25:14 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:07.404 14:25:14 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:07.404 14:25:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.404 14:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.404 14:25:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.404 14:25:14 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.404 14:25:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.404 14:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.404 14:25:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.404 14:25:14 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:07.404 14:25:14 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.404 14:25:14 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.404 14:25:14 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.404 14:25:14 -- target/referrals.sh@21 -- # sort 00:11:07.404 14:25:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.404 14:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.404 14:25:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.404 14:25:14 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:07.404 14:25:14 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.404 14:25:14 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:07.404 14:25:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.404 14:25:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.404 14:25:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.404 14:25:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.404 14:25:14 -- target/referrals.sh@26 -- # sort 00:11:07.662 14:25:14 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:07.662 14:25:14 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.662 14:25:14 -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:07.662 14:25:14 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:07.662 14:25:14 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.662 14:25:14 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.662 14:25:14 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:07.662 14:25:14 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:07.662 14:25:14 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:07.662 14:25:14 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:07.662 14:25:14 -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:07.662 14:25:14 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
--hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.662 14:25:14 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:07.662 14:25:14 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:07.662 14:25:14 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.662 14:25:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.662 14:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.920 14:25:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.920 14:25:14 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:07.920 14:25:14 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.920 14:25:14 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.920 14:25:14 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.920 14:25:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.920 14:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.920 14:25:14 -- target/referrals.sh@21 -- # sort 00:11:07.920 14:25:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.920 14:25:14 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:07.920 14:25:14 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.920 14:25:14 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:07.920 14:25:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.920 14:25:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.920 14:25:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.920 14:25:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.920 14:25:14 -- target/referrals.sh@26 -- # sort 00:11:07.920 14:25:14 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:07.920 14:25:14 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.920 14:25:14 -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:07.920 14:25:14 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:07.920 14:25:14 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.920 14:25:14 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.920 14:25:14 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:08.178 14:25:14 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:08.178 14:25:14 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:08.178 14:25:14 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:08.178 14:25:14 -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:08.178 14:25:14 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:08.178 14:25:14 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 
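Annotation (not part of the captured output): the referral checks above compare what the target reports over RPC with what an initiator sees through the discovery service on 10.0.0.2:8009. A minimal sketch of one add/verify/remove cycle, assuming scripts/rpc.py talks to the target started above; the standalone script form and the rpc variable are illustrative, and the --hostnqn/--hostid options the test passes to nvme discover are omitted here.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # add a referral pointing discovery clients at 127.0.0.2:4430
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    # view from the target side: the referral's transport address
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # view from the initiator side: the referral appears as an extra discovery log entry
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # clean up
    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430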
00:11:08.178 14:25:15 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:08.178 14:25:15 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:08.178 14:25:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.178 14:25:15 -- common/autotest_common.sh@10 -- # set +x 00:11:08.178 14:25:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.178 14:25:15 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.178 14:25:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.178 14:25:15 -- target/referrals.sh@82 -- # jq length 00:11:08.178 14:25:15 -- common/autotest_common.sh@10 -- # set +x 00:11:08.178 14:25:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.178 14:25:15 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:08.178 14:25:15 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:08.178 14:25:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.178 14:25:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.178 14:25:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.178 14:25:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.178 14:25:15 -- target/referrals.sh@26 -- # sort 00:11:08.435 14:25:15 -- target/referrals.sh@26 -- # echo 00:11:08.435 14:25:15 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:08.435 14:25:15 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:08.435 14:25:15 -- target/referrals.sh@86 -- # nvmftestfini 00:11:08.435 14:25:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:08.435 14:25:15 -- nvmf/common.sh@116 -- # sync 00:11:08.435 14:25:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:08.435 14:25:15 -- nvmf/common.sh@119 -- # set +e 00:11:08.435 14:25:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:08.435 14:25:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:08.435 rmmod nvme_tcp 00:11:08.435 rmmod nvme_fabrics 00:11:08.435 rmmod nvme_keyring 00:11:08.435 14:25:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:08.435 14:25:15 -- nvmf/common.sh@123 -- # set -e 00:11:08.435 14:25:15 -- nvmf/common.sh@124 -- # return 0 00:11:08.435 14:25:15 -- nvmf/common.sh@477 -- # '[' -n 62134 ']' 00:11:08.435 14:25:15 -- nvmf/common.sh@478 -- # killprocess 62134 00:11:08.435 14:25:15 -- common/autotest_common.sh@936 -- # '[' -z 62134 ']' 00:11:08.435 14:25:15 -- common/autotest_common.sh@940 -- # kill -0 62134 00:11:08.435 14:25:15 -- common/autotest_common.sh@941 -- # uname 00:11:08.435 14:25:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.435 14:25:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62134 00:11:08.693 14:25:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.693 14:25:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.693 14:25:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62134' 00:11:08.693 killing process with pid 62134 00:11:08.693 14:25:15 -- common/autotest_common.sh@955 -- # kill 62134 00:11:08.693 14:25:15 -- common/autotest_common.sh@960 -- # wait 62134 00:11:09.257 14:25:15 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:09.257 14:25:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:09.257 14:25:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:09.257 14:25:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.257 14:25:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:09.257 14:25:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.257 14:25:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.257 14:25:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.257 14:25:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:09.257 00:11:09.257 real 0m4.188s 00:11:09.257 user 0m13.320s 00:11:09.257 sys 0m1.047s 00:11:09.257 14:25:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:09.257 ************************************ 00:11:09.257 END TEST nvmf_referrals 00:11:09.257 ************************************ 00:11:09.257 14:25:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.257 14:25:16 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:09.257 14:25:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:09.257 14:25:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.257 14:25:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.257 ************************************ 00:11:09.257 START TEST nvmf_connect_disconnect 00:11:09.257 ************************************ 00:11:09.257 14:25:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:09.257 * Looking for test storage... 00:11:09.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.257 14:25:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:09.257 14:25:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:09.257 14:25:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:09.524 14:25:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:09.524 14:25:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:09.524 14:25:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:09.524 14:25:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:09.524 14:25:16 -- scripts/common.sh@335 -- # IFS=.-: 00:11:09.525 14:25:16 -- scripts/common.sh@335 -- # read -ra ver1 00:11:09.525 14:25:16 -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.525 14:25:16 -- scripts/common.sh@336 -- # read -ra ver2 00:11:09.525 14:25:16 -- scripts/common.sh@337 -- # local 'op=<' 00:11:09.525 14:25:16 -- scripts/common.sh@339 -- # ver1_l=2 00:11:09.525 14:25:16 -- scripts/common.sh@340 -- # ver2_l=1 00:11:09.525 14:25:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:09.525 14:25:16 -- scripts/common.sh@343 -- # case "$op" in 00:11:09.525 14:25:16 -- scripts/common.sh@344 -- # : 1 00:11:09.525 14:25:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:09.525 14:25:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.525 14:25:16 -- scripts/common.sh@364 -- # decimal 1 00:11:09.525 14:25:16 -- scripts/common.sh@352 -- # local d=1 00:11:09.525 14:25:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.525 14:25:16 -- scripts/common.sh@354 -- # echo 1 00:11:09.525 14:25:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:09.525 14:25:16 -- scripts/common.sh@365 -- # decimal 2 00:11:09.525 14:25:16 -- scripts/common.sh@352 -- # local d=2 00:11:09.525 14:25:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.525 14:25:16 -- scripts/common.sh@354 -- # echo 2 00:11:09.525 14:25:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:09.525 14:25:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:09.525 14:25:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:09.525 14:25:16 -- scripts/common.sh@367 -- # return 0 00:11:09.525 14:25:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.525 14:25:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:09.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.525 --rc genhtml_branch_coverage=1 00:11:09.525 --rc genhtml_function_coverage=1 00:11:09.525 --rc genhtml_legend=1 00:11:09.525 --rc geninfo_all_blocks=1 00:11:09.525 --rc geninfo_unexecuted_blocks=1 00:11:09.525 00:11:09.525 ' 00:11:09.525 14:25:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:09.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.525 --rc genhtml_branch_coverage=1 00:11:09.525 --rc genhtml_function_coverage=1 00:11:09.525 --rc genhtml_legend=1 00:11:09.525 --rc geninfo_all_blocks=1 00:11:09.525 --rc geninfo_unexecuted_blocks=1 00:11:09.525 00:11:09.525 ' 00:11:09.525 14:25:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:09.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.525 --rc genhtml_branch_coverage=1 00:11:09.525 --rc genhtml_function_coverage=1 00:11:09.525 --rc genhtml_legend=1 00:11:09.525 --rc geninfo_all_blocks=1 00:11:09.525 --rc geninfo_unexecuted_blocks=1 00:11:09.525 00:11:09.525 ' 00:11:09.525 14:25:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:09.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.525 --rc genhtml_branch_coverage=1 00:11:09.525 --rc genhtml_function_coverage=1 00:11:09.525 --rc genhtml_legend=1 00:11:09.525 --rc geninfo_all_blocks=1 00:11:09.525 --rc geninfo_unexecuted_blocks=1 00:11:09.525 00:11:09.525 ' 00:11:09.525 14:25:16 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.525 14:25:16 -- nvmf/common.sh@7 -- # uname -s 00:11:09.525 14:25:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.525 14:25:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.525 14:25:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.525 14:25:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.525 14:25:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.525 14:25:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.525 14:25:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.525 14:25:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.525 14:25:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.525 14:25:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.525 14:25:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
00:11:09.525 14:25:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:11:09.525 14:25:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.525 14:25:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.525 14:25:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.525 14:25:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.525 14:25:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.525 14:25:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.525 14:25:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.525 14:25:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.525 14:25:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.525 14:25:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.525 14:25:16 -- paths/export.sh@5 -- # export PATH 00:11:09.525 14:25:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.525 14:25:16 -- nvmf/common.sh@46 -- # : 0 00:11:09.525 14:25:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:09.525 14:25:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:09.525 14:25:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:09.525 14:25:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.525 14:25:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.525 14:25:16 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:09.525 14:25:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:09.525 14:25:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:09.525 14:25:16 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.525 14:25:16 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.525 14:25:16 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:09.525 14:25:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:09.525 14:25:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.525 14:25:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:09.525 14:25:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:09.525 14:25:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:09.525 14:25:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.525 14:25:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.525 14:25:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.525 14:25:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:09.525 14:25:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:09.525 14:25:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:09.525 14:25:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:09.525 14:25:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:09.525 14:25:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:09.525 14:25:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.525 14:25:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.525 14:25:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:09.525 14:25:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:09.525 14:25:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.525 14:25:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.525 14:25:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.525 14:25:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.525 14:25:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.525 14:25:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.525 14:25:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.525 14:25:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.525 14:25:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:09.525 14:25:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:09.525 Cannot find device "nvmf_tgt_br" 00:11:09.525 14:25:16 -- nvmf/common.sh@154 -- # true 00:11:09.525 14:25:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.525 Cannot find device "nvmf_tgt_br2" 00:11:09.525 14:25:16 -- nvmf/common.sh@155 -- # true 00:11:09.525 14:25:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:09.525 14:25:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:09.525 Cannot find device "nvmf_tgt_br" 00:11:09.525 14:25:16 -- nvmf/common.sh@157 -- # true 00:11:09.525 14:25:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:09.525 Cannot find device "nvmf_tgt_br2" 00:11:09.525 14:25:16 -- nvmf/common.sh@158 -- # true 00:11:09.525 14:25:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:09.525 14:25:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:09.525 14:25:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:11:09.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.525 14:25:16 -- nvmf/common.sh@161 -- # true 00:11:09.525 14:25:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.525 14:25:16 -- nvmf/common.sh@162 -- # true 00:11:09.525 14:25:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.525 14:25:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.525 14:25:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.525 14:25:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.525 14:25:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.525 14:25:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.806 14:25:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.806 14:25:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:09.806 14:25:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:09.806 14:25:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:09.806 14:25:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:09.806 14:25:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:09.806 14:25:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:09.806 14:25:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.806 14:25:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.806 14:25:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.806 14:25:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:09.806 14:25:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:09.806 14:25:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.806 14:25:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.806 14:25:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:09.806 14:25:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:09.806 14:25:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:09.806 14:25:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:09.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:11:09.806 00:11:09.806 --- 10.0.0.2 ping statistics --- 00:11:09.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.806 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:09.806 14:25:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:09.806 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:09.806 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:11:09.806 00:11:09.806 --- 10.0.0.3 ping statistics --- 00:11:09.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.806 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:09.806 14:25:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:09.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:09.806 00:11:09.806 --- 10.0.0.1 ping statistics --- 00:11:09.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.806 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:09.806 14:25:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.806 14:25:16 -- nvmf/common.sh@421 -- # return 0 00:11:09.806 14:25:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:09.806 14:25:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.806 14:25:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:09.806 14:25:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:09.806 14:25:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.806 14:25:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:09.806 14:25:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:09.806 14:25:16 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:09.806 14:25:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:09.806 14:25:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:09.806 14:25:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.806 14:25:16 -- nvmf/common.sh@469 -- # nvmfpid=62455 00:11:09.806 14:25:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.806 14:25:16 -- nvmf/common.sh@470 -- # waitforlisten 62455 00:11:09.806 14:25:16 -- common/autotest_common.sh@829 -- # '[' -z 62455 ']' 00:11:09.806 14:25:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.806 14:25:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.806 14:25:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.806 14:25:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.806 14:25:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.806 [2024-12-06 14:25:16.749267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:09.806 [2024-12-06 14:25:16.749394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.065 [2024-12-06 14:25:16.895683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.323 [2024-12-06 14:25:17.039293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:10.323 [2024-12-06 14:25:17.039842] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.323 [2024-12-06 14:25:17.040039] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.323 [2024-12-06 14:25:17.040216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
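Annotation (not part of the captured output): before the target comes up, nvmf_veth_init builds the virtual topology that the three pings above exercise. The commands below are the ones traced in this run, collected in order for readability; the link-up steps, the second target interface (nvmf_tgt_if2 / 10.0.0.3), and error handling are omitted.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                          # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                               # initiator -> target, as logged above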
00:11:10.323 [2024-12-06 14:25:17.040432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.323 [2024-12-06 14:25:17.041662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.323 [2024-12-06 14:25:17.041836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.323 [2024-12-06 14:25:17.041844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.888 14:25:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.888 14:25:17 -- common/autotest_common.sh@862 -- # return 0 00:11:10.888 14:25:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:10.888 14:25:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:10.888 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 14:25:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:11.146 14:25:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.146 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 [2024-12-06 14:25:17.880710] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.146 14:25:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:11.146 14:25:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.146 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 14:25:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:11.146 14:25:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.146 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 14:25:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.146 14:25:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.146 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 14:25:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.146 14:25:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.146 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 [2024-12-06 14:25:17.948874] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.146 14:25:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:11.146 14:25:17 -- target/connect_disconnect.sh@34 -- # set +x 00:11:13.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:11:22.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.184 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:13:12.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.673 14:29:02 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
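Annotation (not part of the captured output): the long run of "disconnected 1 controller(s)" lines above is the output of the connect/disconnect loop. A minimal sketch of that loop, using the iteration count (num_iterations=100), the connect options (NVME_CONNECT='nvme connect -i 8'), and the listener (nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420) traced above; any intermediate verification the real connect_disconnect.sh performs between the two steps is omitted here.

    for i in $(seq 1 100); do                    # num_iterations=100 from the trace above
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
    done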
00:14:55.673 14:29:02 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:55.673 14:29:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:55.673 14:29:02 -- nvmf/common.sh@116 -- # sync 00:14:55.673 14:29:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:55.673 14:29:02 -- nvmf/common.sh@119 -- # set +e 00:14:55.673 14:29:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:55.673 14:29:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:55.673 rmmod nvme_tcp 00:14:55.673 rmmod nvme_fabrics 00:14:55.673 rmmod nvme_keyring 00:14:55.673 14:29:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:55.673 14:29:02 -- nvmf/common.sh@123 -- # set -e 00:14:55.673 14:29:02 -- nvmf/common.sh@124 -- # return 0 00:14:55.673 14:29:02 -- nvmf/common.sh@477 -- # '[' -n 62455 ']' 00:14:55.673 14:29:02 -- nvmf/common.sh@478 -- # killprocess 62455 00:14:55.673 14:29:02 -- common/autotest_common.sh@936 -- # '[' -z 62455 ']' 00:14:55.673 14:29:02 -- common/autotest_common.sh@940 -- # kill -0 62455 00:14:55.673 14:29:02 -- common/autotest_common.sh@941 -- # uname 00:14:55.673 14:29:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.673 14:29:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62455 00:14:55.673 killing process with pid 62455 00:14:55.673 14:29:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:55.673 14:29:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:55.673 14:29:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62455' 00:14:55.673 14:29:02 -- common/autotest_common.sh@955 -- # kill 62455 00:14:55.673 14:29:02 -- common/autotest_common.sh@960 -- # wait 62455 00:14:56.239 14:29:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:56.239 14:29:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:56.239 14:29:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:56.239 14:29:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.239 14:29:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:56.239 14:29:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.239 14:29:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.239 14:29:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.239 14:29:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:56.239 00:14:56.239 real 3m46.889s 00:14:56.239 user 14m42.156s 00:14:56.239 sys 0m23.264s 00:14:56.239 14:29:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:56.239 14:29:02 -- common/autotest_common.sh@10 -- # set +x 00:14:56.239 ************************************ 00:14:56.239 END TEST nvmf_connect_disconnect 00:14:56.239 ************************************ 00:14:56.239 14:29:02 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:56.240 14:29:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.240 14:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.240 14:29:02 -- common/autotest_common.sh@10 -- # set +x 00:14:56.240 ************************************ 00:14:56.240 START TEST nvmf_multitarget 00:14:56.240 ************************************ 00:14:56.240 14:29:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:56.240 * Looking for test storage... 
00:14:56.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.240 14:29:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:56.240 14:29:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:56.240 14:29:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:56.240 14:29:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:56.240 14:29:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:56.240 14:29:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:56.240 14:29:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:56.240 14:29:03 -- scripts/common.sh@335 -- # IFS=.-: 00:14:56.240 14:29:03 -- scripts/common.sh@335 -- # read -ra ver1 00:14:56.240 14:29:03 -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.240 14:29:03 -- scripts/common.sh@336 -- # read -ra ver2 00:14:56.240 14:29:03 -- scripts/common.sh@337 -- # local 'op=<' 00:14:56.240 14:29:03 -- scripts/common.sh@339 -- # ver1_l=2 00:14:56.240 14:29:03 -- scripts/common.sh@340 -- # ver2_l=1 00:14:56.240 14:29:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:56.240 14:29:03 -- scripts/common.sh@343 -- # case "$op" in 00:14:56.240 14:29:03 -- scripts/common.sh@344 -- # : 1 00:14:56.240 14:29:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:56.240 14:29:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:56.240 14:29:03 -- scripts/common.sh@364 -- # decimal 1 00:14:56.240 14:29:03 -- scripts/common.sh@352 -- # local d=1 00:14:56.240 14:29:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.240 14:29:03 -- scripts/common.sh@354 -- # echo 1 00:14:56.240 14:29:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:56.240 14:29:03 -- scripts/common.sh@365 -- # decimal 2 00:14:56.240 14:29:03 -- scripts/common.sh@352 -- # local d=2 00:14:56.240 14:29:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.240 14:29:03 -- scripts/common.sh@354 -- # echo 2 00:14:56.240 14:29:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:56.240 14:29:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:56.240 14:29:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:56.240 14:29:03 -- scripts/common.sh@367 -- # return 0 00:14:56.240 14:29:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.240 14:29:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:56.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.240 --rc genhtml_branch_coverage=1 00:14:56.240 --rc genhtml_function_coverage=1 00:14:56.240 --rc genhtml_legend=1 00:14:56.240 --rc geninfo_all_blocks=1 00:14:56.240 --rc geninfo_unexecuted_blocks=1 00:14:56.240 00:14:56.240 ' 00:14:56.240 14:29:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:56.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.240 --rc genhtml_branch_coverage=1 00:14:56.240 --rc genhtml_function_coverage=1 00:14:56.240 --rc genhtml_legend=1 00:14:56.240 --rc geninfo_all_blocks=1 00:14:56.240 --rc geninfo_unexecuted_blocks=1 00:14:56.240 00:14:56.240 ' 00:14:56.240 14:29:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:56.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.240 --rc genhtml_branch_coverage=1 00:14:56.240 --rc genhtml_function_coverage=1 00:14:56.240 --rc genhtml_legend=1 00:14:56.240 --rc geninfo_all_blocks=1 00:14:56.240 --rc geninfo_unexecuted_blocks=1 00:14:56.240 00:14:56.240 ' 00:14:56.240 
14:29:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:56.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.240 --rc genhtml_branch_coverage=1 00:14:56.240 --rc genhtml_function_coverage=1 00:14:56.240 --rc genhtml_legend=1 00:14:56.240 --rc geninfo_all_blocks=1 00:14:56.240 --rc geninfo_unexecuted_blocks=1 00:14:56.240 00:14:56.240 ' 00:14:56.240 14:29:03 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.240 14:29:03 -- nvmf/common.sh@7 -- # uname -s 00:14:56.240 14:29:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.240 14:29:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.240 14:29:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.240 14:29:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.240 14:29:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.240 14:29:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.240 14:29:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.240 14:29:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.240 14:29:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.240 14:29:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.240 14:29:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:14:56.240 14:29:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:14:56.240 14:29:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.240 14:29:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.240 14:29:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.240 14:29:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.240 14:29:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.240 14:29:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.240 14:29:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.240 14:29:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.240 14:29:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.240 14:29:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.240 14:29:03 -- paths/export.sh@5 -- # export PATH 00:14:56.240 14:29:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.240 14:29:03 -- nvmf/common.sh@46 -- # : 0 00:14:56.240 14:29:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.240 14:29:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.240 14:29:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.240 14:29:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.240 14:29:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.240 14:29:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:56.240 14:29:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.240 14:29:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.240 14:29:03 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:56.240 14:29:03 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:56.240 14:29:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:56.240 14:29:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.240 14:29:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:56.240 14:29:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:56.240 14:29:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:56.240 14:29:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.240 14:29:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.240 14:29:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.499 14:29:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:56.499 14:29:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:56.499 14:29:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:56.499 14:29:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:56.499 14:29:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:56.499 14:29:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:56.499 14:29:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.499 14:29:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.499 14:29:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.499 14:29:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:56.499 14:29:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.499 14:29:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.499 14:29:03 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.499 14:29:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.499 14:29:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.499 14:29:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.499 14:29:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.499 14:29:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.499 14:29:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:56.499 14:29:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:56.499 Cannot find device "nvmf_tgt_br" 00:14:56.499 14:29:03 -- nvmf/common.sh@154 -- # true 00:14:56.499 14:29:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.499 Cannot find device "nvmf_tgt_br2" 00:14:56.499 14:29:03 -- nvmf/common.sh@155 -- # true 00:14:56.499 14:29:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:56.499 14:29:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:56.499 Cannot find device "nvmf_tgt_br" 00:14:56.499 14:29:03 -- nvmf/common.sh@157 -- # true 00:14:56.499 14:29:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:56.499 Cannot find device "nvmf_tgt_br2" 00:14:56.499 14:29:03 -- nvmf/common.sh@158 -- # true 00:14:56.499 14:29:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:56.499 14:29:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:56.499 14:29:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.499 14:29:03 -- nvmf/common.sh@161 -- # true 00:14:56.499 14:29:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.499 14:29:03 -- nvmf/common.sh@162 -- # true 00:14:56.499 14:29:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.499 14:29:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.499 14:29:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.499 14:29:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.499 14:29:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.499 14:29:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.499 14:29:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.499 14:29:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.499 14:29:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.499 14:29:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:56.499 14:29:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:56.499 14:29:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:56.499 14:29:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:56.499 14:29:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.757 14:29:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.757 14:29:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:56.757 14:29:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:56.757 14:29:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:56.757 14:29:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.757 14:29:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.757 14:29:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.757 14:29:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.757 14:29:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.757 14:29:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:56.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:14:56.757 00:14:56.757 --- 10.0.0.2 ping statistics --- 00:14:56.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.758 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:56.758 14:29:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:56.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:56.758 00:14:56.758 --- 10.0.0.3 ping statistics --- 00:14:56.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.758 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:56.758 14:29:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:56.758 00:14:56.758 --- 10.0.0.1 ping statistics --- 00:14:56.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.758 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:56.758 14:29:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.758 14:29:03 -- nvmf/common.sh@421 -- # return 0 00:14:56.758 14:29:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.758 14:29:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.758 14:29:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.758 14:29:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.758 14:29:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.758 14:29:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.758 14:29:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.758 14:29:03 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:56.758 14:29:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:56.758 14:29:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.758 14:29:03 -- common/autotest_common.sh@10 -- # set +x 00:14:56.758 14:29:03 -- nvmf/common.sh@469 -- # nvmfpid=66232 00:14:56.758 14:29:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.758 14:29:03 -- nvmf/common.sh@470 -- # waitforlisten 66232 00:14:56.758 14:29:03 -- common/autotest_common.sh@829 -- # '[' -z 66232 ']' 00:14:56.758 14:29:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.758 14:29:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.758 14:29:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:56.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.758 14:29:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.758 14:29:03 -- common/autotest_common.sh@10 -- # set +x 00:14:56.758 [2024-12-06 14:29:03.647866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:56.758 [2024-12-06 14:29:03.647974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.015 [2024-12-06 14:29:03.789797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.015 [2024-12-06 14:29:03.925091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.015 [2024-12-06 14:29:03.925567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.015 [2024-12-06 14:29:03.925774] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.015 [2024-12-06 14:29:03.925927] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.015 [2024-12-06 14:29:03.926199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.015 [2024-12-06 14:29:03.926485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.015 [2024-12-06 14:29:03.926483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.015 [2024-12-06 14:29:03.926371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.951 14:29:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.951 14:29:04 -- common/autotest_common.sh@862 -- # return 0 00:14:57.951 14:29:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:57.951 14:29:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.951 14:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:57.951 14:29:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.951 14:29:04 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:57.951 14:29:04 -- target/multitarget.sh@21 -- # jq length 00:14:57.951 14:29:04 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:57.951 14:29:04 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:57.951 14:29:04 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:58.209 "nvmf_tgt_1" 00:14:58.209 14:29:04 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:58.209 "nvmf_tgt_2" 00:14:58.209 14:29:05 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:58.209 14:29:05 -- target/multitarget.sh@28 -- # jq length 00:14:58.466 14:29:05 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:58.466 14:29:05 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:58.466 true 00:14:58.723 14:29:05 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_2 00:14:58.723 true 00:14:58.723 14:29:05 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:58.723 14:29:05 -- target/multitarget.sh@35 -- # jq length 00:14:58.981 14:29:05 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:58.981 14:29:05 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:58.981 14:29:05 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:58.981 14:29:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:58.981 14:29:05 -- nvmf/common.sh@116 -- # sync 00:14:58.981 14:29:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:58.981 14:29:05 -- nvmf/common.sh@119 -- # set +e 00:14:58.981 14:29:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:58.981 14:29:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:58.981 rmmod nvme_tcp 00:14:58.981 rmmod nvme_fabrics 00:14:58.981 rmmod nvme_keyring 00:14:58.981 14:29:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:58.981 14:29:05 -- nvmf/common.sh@123 -- # set -e 00:14:58.981 14:29:05 -- nvmf/common.sh@124 -- # return 0 00:14:58.981 14:29:05 -- nvmf/common.sh@477 -- # '[' -n 66232 ']' 00:14:58.981 14:29:05 -- nvmf/common.sh@478 -- # killprocess 66232 00:14:58.981 14:29:05 -- common/autotest_common.sh@936 -- # '[' -z 66232 ']' 00:14:58.981 14:29:05 -- common/autotest_common.sh@940 -- # kill -0 66232 00:14:58.981 14:29:05 -- common/autotest_common.sh@941 -- # uname 00:14:58.981 14:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.981 14:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66232 00:14:58.981 killing process with pid 66232 00:14:58.981 14:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:58.981 14:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:58.981 14:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66232' 00:14:58.981 14:29:05 -- common/autotest_common.sh@955 -- # kill 66232 00:14:58.981 14:29:05 -- common/autotest_common.sh@960 -- # wait 66232 00:14:59.238 14:29:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:59.238 14:29:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:59.238 14:29:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:59.238 14:29:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.238 14:29:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:59.238 14:29:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.238 14:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.238 14:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.238 14:29:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:59.238 ************************************ 00:14:59.238 END TEST nvmf_multitarget 00:14:59.238 ************************************ 00:14:59.238 00:14:59.238 real 0m3.192s 00:14:59.238 user 0m10.153s 00:14:59.238 sys 0m0.787s 00:14:59.238 14:29:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:59.238 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:14:59.497 14:29:06 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:59.497 14:29:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:59.497 14:29:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.497 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:14:59.497 
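The nvmf_multitarget run above reduces to a short RPC conversation with the running target: count the existing targets, create nvmf_tgt_1 and nvmf_tgt_2, re-count, delete both, and confirm the count is back to one. A hedged sketch of that flow (rpc script path and arguments copied from the trace, not a canonical test; error handling trimmed):

#!/usr/bin/env bash
# Sketch of the multitarget flow traced above; illustrative only.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

# Baseline: only the default target exists.
[[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]]

# Create two more targets, then expect three in total.
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]]

# Tear them down again and confirm only the default target remains.
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]]

Any count mismatch aborts the sketch via set -e, which mirrors how the test treats the '[' N '!=' N ']' checks visible in the trace.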
************************************ 00:14:59.497 START TEST nvmf_rpc 00:14:59.497 ************************************ 00:14:59.497 14:29:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:59.497 * Looking for test storage... 00:14:59.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:59.497 14:29:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:59.497 14:29:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:59.497 14:29:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:59.497 14:29:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:59.497 14:29:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:59.497 14:29:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:59.497 14:29:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:59.497 14:29:06 -- scripts/common.sh@335 -- # IFS=.-: 00:14:59.497 14:29:06 -- scripts/common.sh@335 -- # read -ra ver1 00:14:59.497 14:29:06 -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.497 14:29:06 -- scripts/common.sh@336 -- # read -ra ver2 00:14:59.497 14:29:06 -- scripts/common.sh@337 -- # local 'op=<' 00:14:59.497 14:29:06 -- scripts/common.sh@339 -- # ver1_l=2 00:14:59.497 14:29:06 -- scripts/common.sh@340 -- # ver2_l=1 00:14:59.497 14:29:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:59.497 14:29:06 -- scripts/common.sh@343 -- # case "$op" in 00:14:59.497 14:29:06 -- scripts/common.sh@344 -- # : 1 00:14:59.497 14:29:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:59.497 14:29:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:59.497 14:29:06 -- scripts/common.sh@364 -- # decimal 1 00:14:59.497 14:29:06 -- scripts/common.sh@352 -- # local d=1 00:14:59.497 14:29:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.497 14:29:06 -- scripts/common.sh@354 -- # echo 1 00:14:59.497 14:29:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:59.497 14:29:06 -- scripts/common.sh@365 -- # decimal 2 00:14:59.497 14:29:06 -- scripts/common.sh@352 -- # local d=2 00:14:59.497 14:29:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.497 14:29:06 -- scripts/common.sh@354 -- # echo 2 00:14:59.497 14:29:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:59.497 14:29:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:59.497 14:29:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:59.497 14:29:06 -- scripts/common.sh@367 -- # return 0 00:14:59.497 14:29:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.497 14:29:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.497 --rc genhtml_branch_coverage=1 00:14:59.497 --rc genhtml_function_coverage=1 00:14:59.497 --rc genhtml_legend=1 00:14:59.497 --rc geninfo_all_blocks=1 00:14:59.497 --rc geninfo_unexecuted_blocks=1 00:14:59.497 00:14:59.497 ' 00:14:59.497 14:29:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.497 --rc genhtml_branch_coverage=1 00:14:59.497 --rc genhtml_function_coverage=1 00:14:59.497 --rc genhtml_legend=1 00:14:59.497 --rc geninfo_all_blocks=1 00:14:59.497 --rc geninfo_unexecuted_blocks=1 00:14:59.497 00:14:59.497 ' 00:14:59.497 14:29:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:59.497 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.497 --rc genhtml_branch_coverage=1 00:14:59.497 --rc genhtml_function_coverage=1 00:14:59.497 --rc genhtml_legend=1 00:14:59.497 --rc geninfo_all_blocks=1 00:14:59.497 --rc geninfo_unexecuted_blocks=1 00:14:59.497 00:14:59.497 ' 00:14:59.497 14:29:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.497 --rc genhtml_branch_coverage=1 00:14:59.497 --rc genhtml_function_coverage=1 00:14:59.497 --rc genhtml_legend=1 00:14:59.498 --rc geninfo_all_blocks=1 00:14:59.498 --rc geninfo_unexecuted_blocks=1 00:14:59.498 00:14:59.498 ' 00:14:59.498 14:29:06 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.498 14:29:06 -- nvmf/common.sh@7 -- # uname -s 00:14:59.498 14:29:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.498 14:29:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.498 14:29:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.498 14:29:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.498 14:29:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.498 14:29:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.498 14:29:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.498 14:29:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.498 14:29:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.498 14:29:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.498 14:29:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:14:59.498 14:29:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:14:59.498 14:29:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.498 14:29:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.498 14:29:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.498 14:29:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.498 14:29:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.498 14:29:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.498 14:29:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.498 14:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.498 14:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.498 14:29:06 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.498 14:29:06 -- paths/export.sh@5 -- # export PATH 00:14:59.498 14:29:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.498 14:29:06 -- nvmf/common.sh@46 -- # : 0 00:14:59.498 14:29:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:59.498 14:29:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:59.498 14:29:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:59.498 14:29:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.498 14:29:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.498 14:29:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:59.498 14:29:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:59.498 14:29:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:59.498 14:29:06 -- target/rpc.sh@11 -- # loops=5 00:14:59.498 14:29:06 -- target/rpc.sh@23 -- # nvmftestinit 00:14:59.498 14:29:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:59.498 14:29:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.498 14:29:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:59.498 14:29:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:59.498 14:29:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:59.498 14:29:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.498 14:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.498 14:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.498 14:29:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:59.498 14:29:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:59.498 14:29:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:59.498 14:29:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:59.498 14:29:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:59.498 14:29:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:59.498 14:29:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.498 14:29:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.498 14:29:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:59.498 14:29:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:59.498 14:29:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.498 14:29:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.498 14:29:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.498 14:29:06 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.498 14:29:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.498 14:29:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.498 14:29:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.498 14:29:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.498 14:29:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:59.757 14:29:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:59.757 Cannot find device "nvmf_tgt_br" 00:14:59.757 14:29:06 -- nvmf/common.sh@154 -- # true 00:14:59.757 14:29:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.757 Cannot find device "nvmf_tgt_br2" 00:14:59.757 14:29:06 -- nvmf/common.sh@155 -- # true 00:14:59.757 14:29:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:59.757 14:29:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:59.757 Cannot find device "nvmf_tgt_br" 00:14:59.757 14:29:06 -- nvmf/common.sh@157 -- # true 00:14:59.757 14:29:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:59.757 Cannot find device "nvmf_tgt_br2" 00:14:59.757 14:29:06 -- nvmf/common.sh@158 -- # true 00:14:59.757 14:29:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:59.757 14:29:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:59.757 14:29:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.757 14:29:06 -- nvmf/common.sh@161 -- # true 00:14:59.757 14:29:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.757 14:29:06 -- nvmf/common.sh@162 -- # true 00:14:59.757 14:29:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.757 14:29:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.757 14:29:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.757 14:29:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.757 14:29:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.757 14:29:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.757 14:29:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.757 14:29:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:59.757 14:29:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:59.757 14:29:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:59.757 14:29:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:59.757 14:29:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:59.757 14:29:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:59.757 14:29:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.757 14:29:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.757 14:29:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.757 14:29:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:14:59.757 14:29:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:00.016 14:29:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:00.016 14:29:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.016 14:29:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.016 14:29:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.016 14:29:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.016 14:29:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:00.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:15:00.016 00:15:00.016 --- 10.0.0.2 ping statistics --- 00:15:00.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.016 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:00.016 14:29:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:00.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:00.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:00.016 00:15:00.016 --- 10.0.0.3 ping statistics --- 00:15:00.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.016 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:00.016 14:29:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:00.016 00:15:00.016 --- 10.0.0.1 ping statistics --- 00:15:00.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.016 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:00.016 14:29:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.016 14:29:06 -- nvmf/common.sh@421 -- # return 0 00:15:00.016 14:29:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:00.016 14:29:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.016 14:29:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:00.016 14:29:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:00.016 14:29:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.016 14:29:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:00.016 14:29:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:00.016 14:29:06 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:00.016 14:29:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:00.016 14:29:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:00.016 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:15:00.016 14:29:06 -- nvmf/common.sh@469 -- # nvmfpid=66478 00:15:00.016 14:29:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.016 14:29:06 -- nvmf/common.sh@470 -- # waitforlisten 66478 00:15:00.016 14:29:06 -- common/autotest_common.sh@829 -- # '[' -z 66478 ']' 00:15:00.016 14:29:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.016 14:29:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.017 14:29:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
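Both nvmftestinit calls in this log (NET_TYPE=virt) build the same software-only topology: the nvmf_tgt_ns_spdk namespace holds two target-side veth ends (10.0.0.2 and 10.0.0.3), the host keeps the initiator interface (10.0.0.1), the host-side peers are bridged over nvmf_br, NVMe/TCP port 4420 is opened in iptables, and reachability is confirmed with pings in both directions. A hedged, condensed sketch of that setup (interface names and addresses taken from the trace; not the canonical nvmf/common.sh implementation; run as root):

#!/usr/bin/env bash
# Sketch of the NET_TYPE=virt topology assembled by nvmftestinit above.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: one initiator-facing, two target-facing (moved into the namespace).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target side.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and enslave the host-side peers to one bridge.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# Open the NVMe/TCP port, allow bridge-internal forwarding, and verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages in the trace come from the cleanup half of the same routine, which tries to delete any leftover topology before creating this one and deliberately ignores those errors on a clean host.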
00:15:00.017 14:29:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.017 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:15:00.017 [2024-12-06 14:29:06.888760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:00.017 [2024-12-06 14:29:06.888868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.275 [2024-12-06 14:29:07.030119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.275 [2024-12-06 14:29:07.192813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:00.275 [2024-12-06 14:29:07.193255] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.275 [2024-12-06 14:29:07.193357] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.275 [2024-12-06 14:29:07.193503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.275 [2024-12-06 14:29:07.193718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.275 [2024-12-06 14:29:07.193925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.275 [2024-12-06 14:29:07.194154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.275 [2024-12-06 14:29:07.194159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.208 14:29:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.208 14:29:07 -- common/autotest_common.sh@862 -- # return 0 00:15:01.209 14:29:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:01.209 14:29:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:01.209 14:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:01.209 14:29:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.209 14:29:07 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:01.209 14:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.209 14:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:01.209 14:29:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.209 14:29:07 -- target/rpc.sh@26 -- # stats='{ 00:15:01.209 "poll_groups": [ 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_0", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [] 00:15:01.209 }, 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_1", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [] 00:15:01.209 }, 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_2", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [] 00:15:01.209 }, 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 
00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_3", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [] 00:15:01.209 } 00:15:01.209 ], 00:15:01.209 "tick_rate": 2200000000 00:15:01.209 }' 00:15:01.209 14:29:07 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:01.209 14:29:07 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:01.209 14:29:07 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:01.209 14:29:07 -- target/rpc.sh@15 -- # wc -l 00:15:01.209 14:29:08 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:01.209 14:29:08 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:01.209 14:29:08 -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:01.209 14:29:08 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.209 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.209 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.209 [2024-12-06 14:29:08.072543] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.209 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.209 14:29:08 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:01.209 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.209 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.209 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.209 14:29:08 -- target/rpc.sh@33 -- # stats='{ 00:15:01.209 "poll_groups": [ 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_0", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [ 00:15:01.209 { 00:15:01.209 "trtype": "TCP" 00:15:01.209 } 00:15:01.209 ] 00:15:01.209 }, 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_1", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [ 00:15:01.209 { 00:15:01.209 "trtype": "TCP" 00:15:01.209 } 00:15:01.209 ] 00:15:01.209 }, 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_2", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [ 00:15:01.209 { 00:15:01.209 "trtype": "TCP" 00:15:01.209 } 00:15:01.209 ] 00:15:01.209 }, 00:15:01.209 { 00:15:01.209 "admin_qpairs": 0, 00:15:01.209 "completed_nvme_io": 0, 00:15:01.209 "current_admin_qpairs": 0, 00:15:01.209 "current_io_qpairs": 0, 00:15:01.209 "io_qpairs": 0, 00:15:01.209 "name": "nvmf_tgt_poll_group_3", 00:15:01.209 "pending_bdev_io": 0, 00:15:01.209 "transports": [ 00:15:01.209 { 00:15:01.209 "trtype": "TCP" 00:15:01.209 } 00:15:01.209 ] 00:15:01.209 } 00:15:01.209 ], 00:15:01.209 "tick_rate": 2200000000 00:15:01.209 }' 00:15:01.209 14:29:08 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:01.209 14:29:08 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:01.209 14:29:08 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:01.209 14:29:08 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:01.209 14:29:08 -- target/rpc.sh@35 -- # (( 0 == 0 )) 
00:15:01.209 14:29:08 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:01.209 14:29:08 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:01.467 14:29:08 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:01.467 14:29:08 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:01.467 14:29:08 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:01.467 14:29:08 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:01.467 14:29:08 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:01.467 14:29:08 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:01.467 14:29:08 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:01.467 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.467 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.467 Malloc1 00:15:01.467 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.467 14:29:08 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:01.467 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.467 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.467 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.467 14:29:08 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.467 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.467 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.467 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.467 14:29:08 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:01.467 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.467 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.467 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.467 14:29:08 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.467 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.467 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.467 [2024-12-06 14:29:08.319770] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.467 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.467 14:29:08 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d -a 10.0.0.2 -s 4420 00:15:01.467 14:29:08 -- common/autotest_common.sh@650 -- # local es=0 00:15:01.467 14:29:08 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d -a 10.0.0.2 -s 4420 00:15:01.467 14:29:08 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:01.467 14:29:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.467 14:29:08 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:01.467 14:29:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.467 14:29:08 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:01.468 14:29:08 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.468 14:29:08 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:01.468 14:29:08 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:01.468 14:29:08 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d -a 10.0.0.2 -s 4420 00:15:01.468 [2024-12-06 14:29:08.347994] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d' 00:15:01.468 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:01.468 could not add new controller: failed to write to nvme-fabrics device 00:15:01.468 14:29:08 -- common/autotest_common.sh@653 -- # es=1 00:15:01.468 14:29:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.468 14:29:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.468 14:29:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.468 14:29:08 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:15:01.468 14:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.468 14:29:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.468 14:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.468 14:29:08 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.726 14:29:08 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.726 14:29:08 -- common/autotest_common.sh@1187 -- # local i=0 00:15:01.726 14:29:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.726 14:29:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:01.726 14:29:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:03.627 14:29:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:03.627 14:29:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:03.627 14:29:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.627 14:29:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:03.627 14:29:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.627 14:29:10 -- common/autotest_common.sh@1197 -- # return 0 00:15:03.627 14:29:10 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.886 14:29:10 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.886 14:29:10 -- common/autotest_common.sh@1208 -- # local i=0 00:15:03.886 14:29:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:03.886 14:29:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.886 14:29:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:03.886 14:29:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.886 14:29:10 -- common/autotest_common.sh@1220 -- # return 0 00:15:03.886 14:29:10 -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:15:03.886 14:29:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.886 14:29:10 -- common/autotest_common.sh@10 -- # set +x 00:15:03.886 14:29:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.886 14:29:10 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.886 14:29:10 -- common/autotest_common.sh@650 -- # local es=0 00:15:03.886 14:29:10 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.886 14:29:10 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:03.886 14:29:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.886 14:29:10 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:03.886 14:29:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.886 14:29:10 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:03.886 14:29:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.886 14:29:10 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:03.886 14:29:10 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:03.886 14:29:10 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.886 [2024-12-06 14:29:10.672867] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d' 00:15:03.886 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:03.886 could not add new controller: failed to write to nvme-fabrics device 00:15:03.886 14:29:10 -- common/autotest_common.sh@653 -- # es=1 00:15:03.886 14:29:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:03.886 14:29:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:03.886 14:29:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:03.886 14:29:10 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:03.886 14:29:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.886 14:29:10 -- common/autotest_common.sh@10 -- # set +x 00:15:03.886 14:29:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.886 14:29:10 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:04.144 14:29:10 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:04.144 14:29:10 -- common/autotest_common.sh@1187 -- # local i=0 00:15:04.144 14:29:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.144 14:29:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:04.144 14:29:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:06.104 14:29:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:06.104 
14:29:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:06.104 14:29:12 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.104 14:29:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:06.104 14:29:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.104 14:29:12 -- common/autotest_common.sh@1197 -- # return 0 00:15:06.104 14:29:12 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.104 14:29:12 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:06.104 14:29:12 -- common/autotest_common.sh@1208 -- # local i=0 00:15:06.104 14:29:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:06.104 14:29:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.104 14:29:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:06.104 14:29:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.104 14:29:12 -- common/autotest_common.sh@1220 -- # return 0 00:15:06.104 14:29:12 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.104 14:29:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.104 14:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.104 14:29:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.104 14:29:12 -- target/rpc.sh@81 -- # seq 1 5 00:15:06.104 14:29:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:06.104 14:29:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:06.104 14:29:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.104 14:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.104 14:29:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.104 14:29:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.104 14:29:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.104 14:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.104 [2024-12-06 14:29:12.970934] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.104 14:29:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.104 14:29:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:06.104 14:29:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.104 14:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.104 14:29:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.104 14:29:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:06.104 14:29:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.104 14:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.104 14:29:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.104 14:29:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:06.362 14:29:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.362 14:29:13 -- common/autotest_common.sh@1187 -- # local i=0 00:15:06.362 14:29:13 -- common/autotest_common.sh@1188 -- # 
local nvme_device_counter=1 nvme_devices=0 00:15:06.362 14:29:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:06.362 14:29:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:08.266 14:29:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:08.266 14:29:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:08.266 14:29:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.266 14:29:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:08.266 14:29:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.266 14:29:15 -- common/autotest_common.sh@1197 -- # return 0 00:15:08.266 14:29:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.266 14:29:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.266 14:29:15 -- common/autotest_common.sh@1208 -- # local i=0 00:15:08.266 14:29:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:08.266 14:29:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.266 14:29:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:08.266 14:29:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.524 14:29:15 -- common/autotest_common.sh@1220 -- # return 0 00:15:08.524 14:29:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.524 14:29:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.524 14:29:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.524 14:29:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.524 14:29:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.524 14:29:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.524 14:29:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.524 14:29:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.524 14:29:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:08.524 14:29:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:08.524 14:29:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.524 14:29:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.524 14:29:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.524 14:29:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.524 14:29:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.524 14:29:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.524 [2024-12-06 14:29:15.263895] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.524 14:29:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.524 14:29:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:08.524 14:29:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.524 14:29:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.524 14:29:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.524 14:29:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:08.524 14:29:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.524 14:29:15 -- common/autotest_common.sh@10 
-- # set +x 00:15:08.524 14:29:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.524 14:29:15 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.524 14:29:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.524 14:29:15 -- common/autotest_common.sh@1187 -- # local i=0 00:15:08.524 14:29:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.524 14:29:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:08.524 14:29:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:11.057 14:29:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:11.057 14:29:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:11.057 14:29:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.057 14:29:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:11.057 14:29:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.057 14:29:17 -- common/autotest_common.sh@1197 -- # return 0 00:15:11.057 14:29:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.057 14:29:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.057 14:29:17 -- common/autotest_common.sh@1208 -- # local i=0 00:15:11.057 14:29:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:11.057 14:29:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.057 14:29:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.057 14:29:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:11.057 14:29:17 -- common/autotest_common.sh@1220 -- # return 0 00:15:11.057 14:29:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.057 14:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.057 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.057 14:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.057 14:29:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.057 14:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.057 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.057 14:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.057 14:29:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:11.057 14:29:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.057 14:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.057 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.057 14:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.057 14:29:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.057 14:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.057 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.057 [2024-12-06 14:29:17.582773] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.057 14:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.057 14:29:17 -- 
target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:11.057 14:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.057 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.057 14:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.057 14:29:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.057 14:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.057 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.057 14:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.057 14:29:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.057 14:29:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.057 14:29:17 -- common/autotest_common.sh@1187 -- # local i=0 00:15:11.057 14:29:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.057 14:29:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:11.057 14:29:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:12.959 14:29:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:12.959 14:29:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:12.959 14:29:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.959 14:29:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:12.959 14:29:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.959 14:29:19 -- common/autotest_common.sh@1197 -- # return 0 00:15:12.959 14:29:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.218 14:29:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.218 14:29:19 -- common/autotest_common.sh@1208 -- # local i=0 00:15:13.218 14:29:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:13.218 14:29:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.218 14:29:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:13.218 14:29:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.218 14:29:19 -- common/autotest_common.sh@1220 -- # return 0 00:15:13.218 14:29:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.218 14:29:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.218 14:29:19 -- common/autotest_common.sh@10 -- # set +x 00:15:13.218 14:29:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.218 14:29:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.218 14:29:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.218 14:29:19 -- common/autotest_common.sh@10 -- # set +x 00:15:13.218 14:29:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.218 14:29:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:13.218 14:29:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:13.218 14:29:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.218 14:29:19 -- common/autotest_common.sh@10 -- # set +x 00:15:13.218 14:29:19 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.218 14:29:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.218 14:29:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.218 14:29:19 -- common/autotest_common.sh@10 -- # set +x 00:15:13.218 [2024-12-06 14:29:19.996473] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.218 14:29:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.218 14:29:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:13.218 14:29:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.218 14:29:20 -- common/autotest_common.sh@10 -- # set +x 00:15:13.218 14:29:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.218 14:29:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:13.218 14:29:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.218 14:29:20 -- common/autotest_common.sh@10 -- # set +x 00:15:13.218 14:29:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.218 14:29:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.218 14:29:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.218 14:29:20 -- common/autotest_common.sh@1187 -- # local i=0 00:15:13.219 14:29:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.219 14:29:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:13.219 14:29:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:15.774 14:29:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:15.774 14:29:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:15.774 14:29:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.774 14:29:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:15.774 14:29:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.774 14:29:22 -- common/autotest_common.sh@1197 -- # return 0 00:15:15.774 14:29:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.774 14:29:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.774 14:29:22 -- common/autotest_common.sh@1208 -- # local i=0 00:15:15.774 14:29:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:15.774 14:29:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.774 14:29:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:15.774 14:29:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.774 14:29:22 -- common/autotest_common.sh@1220 -- # return 0 00:15:15.774 14:29:22 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.774 14:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.774 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:15.774 14:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.774 14:29:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.774 14:29:22 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.774 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:15.774 14:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.774 14:29:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:15.774 14:29:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.774 14:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.774 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:15.774 14:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.774 14:29:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.774 14:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.774 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:15.774 [2024-12-06 14:29:22.301807] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.774 14:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.774 14:29:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:15.774 14:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.774 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:15.774 14:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.774 14:29:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.774 14:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.774 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:15.774 14:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.774 14:29:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.774 14:29:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.774 14:29:22 -- common/autotest_common.sh@1187 -- # local i=0 00:15:15.774 14:29:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.774 14:29:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:15.774 14:29:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:17.677 14:29:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:17.677 14:29:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:17.677 14:29:24 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.677 14:29:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:17.677 14:29:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.677 14:29:24 -- common/autotest_common.sh@1197 -- # return 0 00:15:17.677 14:29:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.677 14:29:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.677 14:29:24 -- common/autotest_common.sh@1208 -- # local i=0 00:15:17.677 14:29:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:17.677 14:29:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.677 14:29:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:17.677 14:29:24 -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:15:17.677 14:29:24 -- common/autotest_common.sh@1220 -- # return 0 00:15:17.677 14:29:24 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:17.677 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.677 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.677 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.677 14:29:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.677 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.677 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.677 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.677 14:29:24 -- target/rpc.sh@99 -- # seq 1 5 00:15:17.677 14:29:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:17.677 14:29:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.677 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.677 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.677 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.677 14:29:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.677 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.677 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.677 [2024-12-06 14:29:24.622923] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.677 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.677 14:29:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.677 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.677 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.677 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.677 14:29:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.677 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.677 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.677 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.677 14:29:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.677 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.677 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.938 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:17.938 14:29:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.938 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.938 
14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 [2024-12-06 14:29:24.671001] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.938 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.938 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.938 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.938 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 14:29:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:17.938 14:29:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.938 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 [2024-12-06 14:29:24.723011] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:17.939 14:29:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 [2024-12-06 14:29:24.771131] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:17.939 14:29:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 [2024-12-06 14:29:24.819223] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.939 14:29:24 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:17.939 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.939 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.939 14:29:24 -- target/rpc.sh@110 -- # stats='{ 00:15:17.939 "poll_groups": [ 00:15:17.939 { 00:15:17.939 "admin_qpairs": 2, 00:15:17.939 "completed_nvme_io": 68, 00:15:17.939 "current_admin_qpairs": 0, 00:15:17.939 "current_io_qpairs": 0, 00:15:17.939 "io_qpairs": 16, 00:15:17.939 "name": "nvmf_tgt_poll_group_0", 00:15:17.939 "pending_bdev_io": 0, 00:15:17.939 "transports": [ 00:15:17.939 { 00:15:17.939 "trtype": "TCP" 00:15:17.939 } 00:15:17.939 ] 00:15:17.939 }, 00:15:17.939 { 00:15:17.939 "admin_qpairs": 3, 00:15:17.939 "completed_nvme_io": 118, 00:15:17.939 "current_admin_qpairs": 0, 00:15:17.939 "current_io_qpairs": 0, 00:15:17.939 "io_qpairs": 17, 00:15:17.939 "name": "nvmf_tgt_poll_group_1", 00:15:17.939 "pending_bdev_io": 0, 00:15:17.939 "transports": [ 00:15:17.939 { 00:15:17.939 "trtype": "TCP" 00:15:17.939 } 00:15:17.939 ] 00:15:17.939 }, 00:15:17.939 { 00:15:17.939 "admin_qpairs": 1, 00:15:17.939 "completed_nvme_io": 167, 00:15:17.939 "current_admin_qpairs": 0, 00:15:17.939 "current_io_qpairs": 0, 00:15:17.939 "io_qpairs": 19, 00:15:17.939 "name": "nvmf_tgt_poll_group_2", 00:15:17.939 "pending_bdev_io": 0, 00:15:17.939 "transports": [ 00:15:17.939 { 00:15:17.939 "trtype": "TCP" 00:15:17.939 } 00:15:17.939 ] 00:15:17.939 }, 00:15:17.939 { 00:15:17.939 "admin_qpairs": 1, 00:15:17.939 "completed_nvme_io": 67, 00:15:17.939 "current_admin_qpairs": 0, 00:15:17.939 "current_io_qpairs": 0, 00:15:17.939 "io_qpairs": 18, 00:15:17.939 "name": "nvmf_tgt_poll_group_3", 00:15:17.939 "pending_bdev_io": 0, 00:15:17.939 "transports": [ 00:15:17.939 { 00:15:17.939 "trtype": "TCP" 00:15:17.939 } 00:15:17.939 ] 00:15:17.939 } 00:15:17.939 ], 00:15:17.939 "tick_rate": 2200000000 00:15:17.939 }' 00:15:17.939 14:29:24 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:17.939 14:29:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:17.939 14:29:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:17.939 14:29:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:18.199 14:29:24 -- target/rpc.sh@112 -- # (( 7 > 0 )) 
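The loop above exercises the basic NVMe-oF/TCP provisioning cycle against nqn.2016-06.io.spdk:cnode1, and nvmf_get_stats then reports per-poll-group queue-pair counters that the test sums with jq + awk. A minimal standalone sketch of one iteration of that cycle, assuming a running nvmf_tgt and reusing the rpc.py path, serial, and 10.0.0.2:4420 listener seen in this run:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# target side: subsystem, TCP listener, namespace backed by bdev Malloc1, open host access
$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
$RPC nvmf_subsystem_allow_any_host "$NQN"

# host side: connect, wait for the serial to appear, disconnect
# (the run also passes --hostnqn/--hostid, see the common.sh setup later)
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: expect 1
nvme disconnect -n "$NQN"

# teardown, as at the end of each loop iteration
$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"

# the final check mirrors the jsum helper: sum admin/io qpairs across poll groups
$RPC nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'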
00:15:18.199 14:29:24 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:18.199 14:29:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:18.199 14:29:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:18.199 14:29:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:18.199 14:29:24 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:15:18.199 14:29:24 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:18.199 14:29:24 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:18.199 14:29:24 -- target/rpc.sh@123 -- # nvmftestfini 00:15:18.199 14:29:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:18.199 14:29:24 -- nvmf/common.sh@116 -- # sync 00:15:18.199 14:29:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:18.199 14:29:25 -- nvmf/common.sh@119 -- # set +e 00:15:18.199 14:29:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:18.199 14:29:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:18.199 rmmod nvme_tcp 00:15:18.199 rmmod nvme_fabrics 00:15:18.199 rmmod nvme_keyring 00:15:18.199 14:29:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:18.199 14:29:25 -- nvmf/common.sh@123 -- # set -e 00:15:18.199 14:29:25 -- nvmf/common.sh@124 -- # return 0 00:15:18.199 14:29:25 -- nvmf/common.sh@477 -- # '[' -n 66478 ']' 00:15:18.199 14:29:25 -- nvmf/common.sh@478 -- # killprocess 66478 00:15:18.199 14:29:25 -- common/autotest_common.sh@936 -- # '[' -z 66478 ']' 00:15:18.199 14:29:25 -- common/autotest_common.sh@940 -- # kill -0 66478 00:15:18.199 14:29:25 -- common/autotest_common.sh@941 -- # uname 00:15:18.199 14:29:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.199 14:29:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66478 00:15:18.199 killing process with pid 66478 00:15:18.199 14:29:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.199 14:29:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.199 14:29:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66478' 00:15:18.199 14:29:25 -- common/autotest_common.sh@955 -- # kill 66478 00:15:18.199 14:29:25 -- common/autotest_common.sh@960 -- # wait 66478 00:15:18.457 14:29:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:18.457 14:29:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:18.457 14:29:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:18.457 14:29:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.457 14:29:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:18.457 14:29:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.457 14:29:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.457 14:29:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.457 14:29:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:18.714 00:15:18.714 real 0m19.178s 00:15:18.714 user 1m11.905s 00:15:18.714 sys 0m2.269s 00:15:18.714 14:29:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:18.714 14:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:18.714 ************************************ 00:15:18.714 END TEST nvmf_rpc 00:15:18.714 ************************************ 00:15:18.714 14:29:25 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:18.714 14:29:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:18.714 14:29:25 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:18.714 14:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:18.714 ************************************ 00:15:18.714 START TEST nvmf_invalid 00:15:18.714 ************************************ 00:15:18.714 14:29:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:18.714 * Looking for test storage... 00:15:18.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:18.715 14:29:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:18.715 14:29:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:18.715 14:29:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:18.715 14:29:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:18.715 14:29:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:18.715 14:29:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:18.715 14:29:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:18.715 14:29:25 -- scripts/common.sh@335 -- # IFS=.-: 00:15:18.715 14:29:25 -- scripts/common.sh@335 -- # read -ra ver1 00:15:18.715 14:29:25 -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.715 14:29:25 -- scripts/common.sh@336 -- # read -ra ver2 00:15:18.715 14:29:25 -- scripts/common.sh@337 -- # local 'op=<' 00:15:18.715 14:29:25 -- scripts/common.sh@339 -- # ver1_l=2 00:15:18.715 14:29:25 -- scripts/common.sh@340 -- # ver2_l=1 00:15:18.715 14:29:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:18.715 14:29:25 -- scripts/common.sh@343 -- # case "$op" in 00:15:18.715 14:29:25 -- scripts/common.sh@344 -- # : 1 00:15:18.715 14:29:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:18.715 14:29:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.715 14:29:25 -- scripts/common.sh@364 -- # decimal 1 00:15:18.715 14:29:25 -- scripts/common.sh@352 -- # local d=1 00:15:18.715 14:29:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.715 14:29:25 -- scripts/common.sh@354 -- # echo 1 00:15:18.715 14:29:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:18.715 14:29:25 -- scripts/common.sh@365 -- # decimal 2 00:15:18.715 14:29:25 -- scripts/common.sh@352 -- # local d=2 00:15:18.715 14:29:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.715 14:29:25 -- scripts/common.sh@354 -- # echo 2 00:15:18.715 14:29:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:18.715 14:29:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:18.715 14:29:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:18.715 14:29:25 -- scripts/common.sh@367 -- # return 0 00:15:18.715 14:29:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.715 14:29:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:18.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.715 --rc genhtml_branch_coverage=1 00:15:18.715 --rc genhtml_function_coverage=1 00:15:18.715 --rc genhtml_legend=1 00:15:18.715 --rc geninfo_all_blocks=1 00:15:18.715 --rc geninfo_unexecuted_blocks=1 00:15:18.715 00:15:18.715 ' 00:15:18.715 14:29:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:18.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.715 --rc genhtml_branch_coverage=1 00:15:18.715 --rc genhtml_function_coverage=1 00:15:18.715 --rc genhtml_legend=1 00:15:18.715 --rc geninfo_all_blocks=1 00:15:18.715 --rc geninfo_unexecuted_blocks=1 00:15:18.715 00:15:18.715 ' 00:15:18.715 14:29:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:18.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.715 --rc genhtml_branch_coverage=1 00:15:18.715 --rc genhtml_function_coverage=1 00:15:18.715 --rc genhtml_legend=1 00:15:18.715 --rc geninfo_all_blocks=1 00:15:18.715 --rc geninfo_unexecuted_blocks=1 00:15:18.715 00:15:18.715 ' 00:15:18.715 14:29:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:18.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.715 --rc genhtml_branch_coverage=1 00:15:18.715 --rc genhtml_function_coverage=1 00:15:18.715 --rc genhtml_legend=1 00:15:18.715 --rc geninfo_all_blocks=1 00:15:18.715 --rc geninfo_unexecuted_blocks=1 00:15:18.715 00:15:18.715 ' 00:15:18.715 14:29:25 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.715 14:29:25 -- nvmf/common.sh@7 -- # uname -s 00:15:18.715 14:29:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.715 14:29:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.715 14:29:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.715 14:29:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.715 14:29:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.715 14:29:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.715 14:29:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.715 14:29:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.715 14:29:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.715 14:29:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.973 14:29:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:15:18.973 
14:29:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:15:18.973 14:29:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.973 14:29:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.973 14:29:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.973 14:29:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.973 14:29:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.973 14:29:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.973 14:29:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.973 14:29:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.973 14:29:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.974 14:29:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.974 14:29:25 -- paths/export.sh@5 -- # export PATH 00:15:18.974 14:29:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.974 14:29:25 -- nvmf/common.sh@46 -- # : 0 00:15:18.974 14:29:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:18.974 14:29:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:18.974 14:29:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:18.974 14:29:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.974 14:29:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.974 14:29:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
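The common.sh block above is where the host identity used throughout this run comes from: nvme gen-hostnqn produces the uuid-based NQN, the matching uuid becomes the host ID, and both are packed into the NVME_HOST array that later expands into every nvme connect call. A small sketch of that pattern with the same variable names (how common.sh actually derives NVME_HOSTID from the NQN is an assumption here):

NVMF_PORT=4420
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:f4dc61da-...
NVME_HOSTID=${NVME_HOSTNQN##*:}        # assumption: host ID is the uuid tail of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# later connects reuse the generated identity:
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$NVMF_PORT"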
00:15:18.974 14:29:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:18.974 14:29:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:18.974 14:29:25 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:15:18.974 14:29:25 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.974 14:29:25 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:18.974 14:29:25 -- target/invalid.sh@14 -- # target=foobar 00:15:18.974 14:29:25 -- target/invalid.sh@16 -- # RANDOM=0 00:15:18.974 14:29:25 -- target/invalid.sh@34 -- # nvmftestinit 00:15:18.974 14:29:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:18.974 14:29:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.974 14:29:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:18.974 14:29:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:18.974 14:29:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:18.974 14:29:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.974 14:29:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.974 14:29:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.974 14:29:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:18.974 14:29:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:18.974 14:29:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:18.974 14:29:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:18.974 14:29:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:18.974 14:29:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:18.974 14:29:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.974 14:29:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.974 14:29:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:18.974 14:29:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:18.974 14:29:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.974 14:29:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.974 14:29:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.974 14:29:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.974 14:29:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.974 14:29:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.974 14:29:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.974 14:29:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.974 14:29:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:18.974 14:29:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:18.974 Cannot find device "nvmf_tgt_br" 00:15:18.974 14:29:25 -- nvmf/common.sh@154 -- # true 00:15:18.974 14:29:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.974 Cannot find device "nvmf_tgt_br2" 00:15:18.974 14:29:25 -- nvmf/common.sh@155 -- # true 00:15:18.974 14:29:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:18.974 14:29:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:18.974 Cannot find device "nvmf_tgt_br" 00:15:18.974 14:29:25 -- nvmf/common.sh@157 -- # true 00:15:18.974 14:29:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:18.974 Cannot find device "nvmf_tgt_br2" 00:15:18.974 14:29:25 -- nvmf/common.sh@158 -- # true 00:15:18.974 14:29:25 
-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:18.974 14:29:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:18.974 14:29:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.974 14:29:25 -- nvmf/common.sh@161 -- # true 00:15:18.974 14:29:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.974 14:29:25 -- nvmf/common.sh@162 -- # true 00:15:18.974 14:29:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.974 14:29:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.974 14:29:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.974 14:29:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.974 14:29:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.974 14:29:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.974 14:29:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.974 14:29:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:18.974 14:29:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:18.974 14:29:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:18.974 14:29:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:19.236 14:29:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:19.236 14:29:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:19.236 14:29:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.236 14:29:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.236 14:29:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:19.236 14:29:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:19.236 14:29:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:19.236 14:29:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.236 14:29:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.236 14:29:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.236 14:29:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.236 14:29:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.236 14:29:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:19.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:19.236 00:15:19.236 --- 10.0.0.2 ping statistics --- 00:15:19.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.236 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:19.236 14:29:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:19.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:19.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:19.236 00:15:19.236 --- 10.0.0.3 ping statistics --- 00:15:19.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.236 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:19.236 14:29:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:19.236 00:15:19.236 --- 10.0.0.1 ping statistics --- 00:15:19.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.236 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:19.236 14:29:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.236 14:29:26 -- nvmf/common.sh@421 -- # return 0 00:15:19.236 14:29:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:19.236 14:29:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.236 14:29:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:19.236 14:29:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:19.236 14:29:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.236 14:29:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:19.236 14:29:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:19.236 14:29:26 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:19.236 14:29:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:19.236 14:29:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.236 14:29:26 -- common/autotest_common.sh@10 -- # set +x 00:15:19.236 14:29:26 -- nvmf/common.sh@469 -- # nvmfpid=66998 00:15:19.236 14:29:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.236 14:29:26 -- nvmf/common.sh@470 -- # waitforlisten 66998 00:15:19.236 14:29:26 -- common/autotest_common.sh@829 -- # '[' -z 66998 ']' 00:15:19.236 14:29:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.236 14:29:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.236 14:29:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.236 14:29:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.236 14:29:26 -- common/autotest_common.sh@10 -- # set +x 00:15:19.236 [2024-12-06 14:29:26.133299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:19.236 [2024-12-06 14:29:26.133455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.495 [2024-12-06 14:29:26.275332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.495 [2024-12-06 14:29:26.395610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:19.495 [2024-12-06 14:29:26.396008] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.495 [2024-12-06 14:29:26.396166] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
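nvmftestinit above builds the virtual topology the rest of this run talks over: a network namespace for the target, veth pairs joined by a bridge, addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), an iptables accept rule for port 4420, three sanity pings, and finally nvmf_tgt started inside the namespace. A condensed sketch with the same names and addresses as the log (prior-run teardown and the second target interface omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                        # initiator -> target

# then the target app runs inside the namespace, as in nvmfappstart:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF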
00:15:19.495 [2024-12-06 14:29:26.396322] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.495 [2024-12-06 14:29:26.396653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.495 [2024-12-06 14:29:26.396801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.495 [2024-12-06 14:29:26.396980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.495 [2024-12-06 14:29:26.396984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.435 14:29:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.435 14:29:27 -- common/autotest_common.sh@862 -- # return 0 00:15:20.435 14:29:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:20.435 14:29:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.435 14:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:20.435 14:29:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.435 14:29:27 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:20.435 14:29:27 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30860 00:15:20.693 [2024-12-06 14:29:27.512958] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:20.693 14:29:27 -- target/invalid.sh@40 -- # out='2024/12/06 14:29:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30860 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:15:20.693 request: 00:15:20.693 { 00:15:20.693 "method": "nvmf_create_subsystem", 00:15:20.693 "params": { 00:15:20.693 "nqn": "nqn.2016-06.io.spdk:cnode30860", 00:15:20.693 "tgt_name": "foobar" 00:15:20.693 } 00:15:20.693 } 00:15:20.693 Got JSON-RPC error response 00:15:20.693 GoRPCClient: error on JSON-RPC call' 00:15:20.693 14:29:27 -- target/invalid.sh@41 -- # [[ 2024/12/06 14:29:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30860 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:15:20.693 request: 00:15:20.693 { 00:15:20.693 "method": "nvmf_create_subsystem", 00:15:20.693 "params": { 00:15:20.693 "nqn": "nqn.2016-06.io.spdk:cnode30860", 00:15:20.693 "tgt_name": "foobar" 00:15:20.693 } 00:15:20.693 } 00:15:20.693 Got JSON-RPC error response 00:15:20.693 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:20.693 14:29:27 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:20.693 14:29:27 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15863 00:15:20.952 [2024-12-06 14:29:27.833452] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15863: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:20.952 14:29:27 -- target/invalid.sh@45 -- # out='2024/12/06 14:29:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15863 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:15:20.952 request: 00:15:20.952 { 00:15:20.952 
"method": "nvmf_create_subsystem", 00:15:20.952 "params": { 00:15:20.952 "nqn": "nqn.2016-06.io.spdk:cnode15863", 00:15:20.952 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:15:20.952 } 00:15:20.952 } 00:15:20.952 Got JSON-RPC error response 00:15:20.952 GoRPCClient: error on JSON-RPC call' 00:15:20.952 14:29:27 -- target/invalid.sh@46 -- # [[ 2024/12/06 14:29:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15863 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:15:20.952 request: 00:15:20.952 { 00:15:20.952 "method": "nvmf_create_subsystem", 00:15:20.952 "params": { 00:15:20.952 "nqn": "nqn.2016-06.io.spdk:cnode15863", 00:15:20.952 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:15:20.952 } 00:15:20.952 } 00:15:20.952 Got JSON-RPC error response 00:15:20.952 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:20.952 14:29:27 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:20.952 14:29:27 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26742 00:15:21.211 [2024-12-06 14:29:28.121831] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26742: invalid model number 'SPDK_Controller' 00:15:21.211 14:29:28 -- target/invalid.sh@50 -- # out='2024/12/06 14:29:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode26742], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:15:21.211 request: 00:15:21.211 { 00:15:21.211 "method": "nvmf_create_subsystem", 00:15:21.211 "params": { 00:15:21.211 "nqn": "nqn.2016-06.io.spdk:cnode26742", 00:15:21.211 "model_number": "SPDK_Controller\u001f" 00:15:21.211 } 00:15:21.211 } 00:15:21.211 Got JSON-RPC error response 00:15:21.211 GoRPCClient: error on JSON-RPC call' 00:15:21.211 14:29:28 -- target/invalid.sh@51 -- # [[ 2024/12/06 14:29:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode26742], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:15:21.211 request: 00:15:21.211 { 00:15:21.211 "method": "nvmf_create_subsystem", 00:15:21.211 "params": { 00:15:21.211 "nqn": "nqn.2016-06.io.spdk:cnode26742", 00:15:21.211 "model_number": "SPDK_Controller\u001f" 00:15:21.211 } 00:15:21.211 } 00:15:21.211 Got JSON-RPC error response 00:15:21.211 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:21.211 14:29:28 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:21.211 14:29:28 -- target/invalid.sh@19 -- # local length=21 ll 00:15:21.211 14:29:28 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:21.211 14:29:28 -- target/invalid.sh@21 -- # local chars 00:15:21.211 14:29:28 -- target/invalid.sh@22 -- # local 
string 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # printf %x 79 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # string+=O 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # printf %x 101 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # string+=e 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # printf %x 83 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # string+=S 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # printf %x 97 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:21.211 14:29:28 -- target/invalid.sh@25 -- # string+=a 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.211 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 62 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+='>' 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 72 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+=H 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 112 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+=p 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 127 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 112 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+=p 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 99 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+=c 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 125 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # 
string+='}' 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 33 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+='!' 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 48 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+=0 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.469 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # printf %x 76 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:21.469 14:29:28 -- target/invalid.sh@25 -- # string+=L 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # printf %x 55 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # string+=7 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # printf %x 119 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # string+=w 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # printf %x 64 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # string+=@ 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # printf %x 96 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # string+='`' 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # printf %x 51 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # string+=3 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # printf %x 68 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # string+=D 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # printf %x 114 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:21.470 14:29:28 -- target/invalid.sh@25 -- # string+=r 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.470 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.470 14:29:28 -- target/invalid.sh@28 -- # [[ O == \- ]] 00:15:21.470 14:29:28 -- target/invalid.sh@31 -- # echo 'OeSa>Hppc}!0L7w@`3Dr' 00:15:21.470 14:29:28 -- 
target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OeSa>Hppc}!0L7w@`3Dr' nqn.2016-06.io.spdk:cnode20693 00:15:21.729 [2024-12-06 14:29:28.518489] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20693: invalid serial number 'OeSa>Hppc}!0L7w@`3Dr' 00:15:21.729 14:29:28 -- target/invalid.sh@54 -- # out='2024/12/06 14:29:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20693 serial_number:OeSa>Hppc}!0L7w@`3Dr], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN OeSa>Hppc}!0L7w@`3Dr 00:15:21.729 request: 00:15:21.729 { 00:15:21.729 "method": "nvmf_create_subsystem", 00:15:21.729 "params": { 00:15:21.729 "nqn": "nqn.2016-06.io.spdk:cnode20693", 00:15:21.729 "serial_number": "OeSa>Hp\u007fpc}!0L7w@`3Dr" 00:15:21.729 } 00:15:21.729 } 00:15:21.729 Got JSON-RPC error response 00:15:21.729 GoRPCClient: error on JSON-RPC call' 00:15:21.729 14:29:28 -- target/invalid.sh@55 -- # [[ 2024/12/06 14:29:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20693 serial_number:OeSa>Hppc}!0L7w@`3Dr], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN OeSa>Hppc}!0L7w@`3Dr 00:15:21.729 request: 00:15:21.729 { 00:15:21.729 "method": "nvmf_create_subsystem", 00:15:21.729 "params": { 00:15:21.729 "nqn": "nqn.2016-06.io.spdk:cnode20693", 00:15:21.729 "serial_number": "OeSa>Hp\u007fpc}!0L7w@`3Dr" 00:15:21.729 } 00:15:21.729 } 00:15:21.729 Got JSON-RPC error response 00:15:21.729 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:21.729 14:29:28 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:21.729 14:29:28 -- target/invalid.sh@19 -- # local length=41 ll 00:15:21.729 14:29:28 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:21.729 14:29:28 -- target/invalid.sh@21 -- # local chars 00:15:21.729 14:29:28 -- target/invalid.sh@22 -- # local string 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 50 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=2 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 110 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=n 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 125 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+='}' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 
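
The printf %x / echo -e loop running above and below is invalid.sh's helper for assembling random strings out of the printable-ASCII range 0x20-0x7f (DEL included), which is how the serial number 'OeSa>Hp...' rejected just above and the 41-character model number being built below are produced. A minimal stand-alone sketch of the same idea, not copied from the script (helper name and layout simplified; the real helper also guards against a string that starts with '-'):

  # Sketch of the random-string generation whose xtrace output surrounds this note:
  # pick bytes from printable ASCII (32..127) and append them one at a time.
  gen_random_s() {
      local length=$1 ll x string=
      local chars=( $(seq 32 127) )                  # same range as the chars=() array in the trace
      for (( ll = 0; ll < length; ll++ )); do
          x=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")
          string+=$(echo -e "\\x$x")                 # e.g. 0x4f -> 'O', 0x7f -> DEL
      done
      printf '%s\n' "$string"
  }

Called as, for example, gen_random_s 41, it yields strings like the one passed to nvmf_create_subsystem below, which the target then rejects with Code=-32602 because of the embedded non-printable byte.
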
00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 123 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+='{' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 117 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=u 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 74 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=J 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 70 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=F 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 93 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=']' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 81 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=Q 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 124 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+='|' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 70 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=F 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 127 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 59 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=';' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 72 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=H 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 46 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=. 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 96 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+='`' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 94 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+='^' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 64 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=@ 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 109 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=m 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 53 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=5 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 85 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=U 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 103 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=g 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 34 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+='"' 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 105 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=i 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 117 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=u 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # printf %x 81 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:21.729 14:29:28 -- target/invalid.sh@25 -- # string+=Q 00:15:21.729 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.730 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.730 14:29:28 -- target/invalid.sh@25 -- # printf %x 37 00:15:21.730 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:21.730 14:29:28 -- target/invalid.sh@25 -- # string+=% 00:15:21.730 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.730 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.730 14:29:28 -- target/invalid.sh@25 -- # printf %x 84 00:15:21.730 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:21.730 14:29:28 -- target/invalid.sh@25 -- # string+=T 00:15:21.730 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.730 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 123 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+='{' 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 75 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=K 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 111 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=o 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 78 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=N 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 40 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+='(' 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 47 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=/ 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 109 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=m 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 58 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=: 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ 
)) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 38 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+='&' 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 53 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=5 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 75 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=K 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 117 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+=u 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # printf %x 35 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:21.989 14:29:28 -- target/invalid.sh@25 -- # string+='#' 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.989 14:29:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.989 14:29:28 -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:15:21.989 14:29:28 -- target/invalid.sh@31 -- # echo '2n}{uJF]Q|F;H.`^@m5Ug"iuQ%T{KoN(/m:&5Ku#' 00:15:21.989 14:29:28 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '2n}{uJF]Q|F;H.`^@m5Ug"iuQ%T{KoN(/m:&5Ku#' nqn.2016-06.io.spdk:cnode9084 00:15:22.248 [2024-12-06 14:29:29.047179] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9084: invalid model number '2n}{uJF]Q|F;H.`^@m5Ug"iuQ%T{KoN(/m:&5Ku#' 00:15:22.248 14:29:29 -- target/invalid.sh@58 -- # out='2024/12/06 14:29:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:2n}{uJF]Q|F;H.`^@m5Ug"iuQ%T{KoN(/m:&5Ku# nqn:nqn.2016-06.io.spdk:cnode9084], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 2n}{uJF]Q|F;H.`^@m5Ug"iuQ%T{KoN(/m:&5Ku# 00:15:22.248 request: 00:15:22.248 { 00:15:22.248 "method": "nvmf_create_subsystem", 00:15:22.248 "params": { 00:15:22.248 "nqn": "nqn.2016-06.io.spdk:cnode9084", 00:15:22.248 "model_number": "2n}{uJF]Q|F\u007f;H.`^@m5Ug\"iuQ%T{KoN(/m:&5Ku#" 00:15:22.248 } 00:15:22.248 } 00:15:22.248 Got JSON-RPC error response 00:15:22.248 GoRPCClient: error on JSON-RPC call' 00:15:22.248 14:29:29 -- target/invalid.sh@59 -- # [[ 2024/12/06 14:29:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:2n}{uJF]Q|F;H.`^@m5Ug"iuQ%T{KoN(/m:&5Ku# nqn:nqn.2016-06.io.spdk:cnode9084], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 2n}{uJF]Q|F;H.`^@m5Ug"iuQ%T{KoN(/m:&5Ku# 00:15:22.248 request: 00:15:22.248 { 00:15:22.248 "method": "nvmf_create_subsystem", 00:15:22.248 "params": { 00:15:22.248 "nqn": "nqn.2016-06.io.spdk:cnode9084", 00:15:22.248 "model_number": 
"2n}{uJF]Q|F\u007f;H.`^@m5Ug\"iuQ%T{KoN(/m:&5Ku#" 00:15:22.248 } 00:15:22.248 } 00:15:22.248 Got JSON-RPC error response 00:15:22.248 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:22.248 14:29:29 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:22.506 [2024-12-06 14:29:29.335601] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.506 14:29:29 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:23.071 14:29:29 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:23.071 14:29:29 -- target/invalid.sh@67 -- # echo '' 00:15:23.071 14:29:29 -- target/invalid.sh@67 -- # head -n 1 00:15:23.071 14:29:29 -- target/invalid.sh@67 -- # IP= 00:15:23.071 14:29:29 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:23.328 [2024-12-06 14:29:30.043456] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:23.328 14:29:30 -- target/invalid.sh@69 -- # out='2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:15:23.328 request: 00:15:23.328 { 00:15:23.328 "method": "nvmf_subsystem_remove_listener", 00:15:23.328 "params": { 00:15:23.328 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:23.328 "listen_address": { 00:15:23.328 "trtype": "tcp", 00:15:23.328 "traddr": "", 00:15:23.328 "trsvcid": "4421" 00:15:23.328 } 00:15:23.328 } 00:15:23.328 } 00:15:23.328 Got JSON-RPC error response 00:15:23.328 GoRPCClient: error on JSON-RPC call' 00:15:23.328 14:29:30 -- target/invalid.sh@70 -- # [[ 2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:15:23.328 request: 00:15:23.328 { 00:15:23.328 "method": "nvmf_subsystem_remove_listener", 00:15:23.328 "params": { 00:15:23.328 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:23.328 "listen_address": { 00:15:23.328 "trtype": "tcp", 00:15:23.328 "traddr": "", 00:15:23.328 "trsvcid": "4421" 00:15:23.328 } 00:15:23.328 } 00:15:23.328 } 00:15:23.328 Got JSON-RPC error response 00:15:23.328 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:23.328 14:29:30 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1346 -i 0 00:15:23.588 [2024-12-06 14:29:30.299674] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1346: invalid cntlid range [0-65519] 00:15:23.588 14:29:30 -- target/invalid.sh@73 -- # out='2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode1346], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:15:23.588 request: 00:15:23.588 { 00:15:23.588 "method": "nvmf_create_subsystem", 00:15:23.588 "params": { 00:15:23.588 "nqn": "nqn.2016-06.io.spdk:cnode1346", 00:15:23.588 "min_cntlid": 0 00:15:23.588 } 00:15:23.588 
} 00:15:23.588 Got JSON-RPC error response 00:15:23.588 GoRPCClient: error on JSON-RPC call' 00:15:23.588 14:29:30 -- target/invalid.sh@74 -- # [[ 2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode1346], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:15:23.588 request: 00:15:23.588 { 00:15:23.588 "method": "nvmf_create_subsystem", 00:15:23.588 "params": { 00:15:23.588 "nqn": "nqn.2016-06.io.spdk:cnode1346", 00:15:23.588 "min_cntlid": 0 00:15:23.588 } 00:15:23.588 } 00:15:23.588 Got JSON-RPC error response 00:15:23.588 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.588 14:29:30 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4698 -i 65520 00:15:23.589 [2024-12-06 14:29:30.556024] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4698: invalid cntlid range [65520-65519] 00:15:23.846 14:29:30 -- target/invalid.sh@75 -- # out='2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4698], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:15:23.846 request: 00:15:23.846 { 00:15:23.846 "method": "nvmf_create_subsystem", 00:15:23.846 "params": { 00:15:23.846 "nqn": "nqn.2016-06.io.spdk:cnode4698", 00:15:23.846 "min_cntlid": 65520 00:15:23.846 } 00:15:23.846 } 00:15:23.846 Got JSON-RPC error response 00:15:23.846 GoRPCClient: error on JSON-RPC call' 00:15:23.846 14:29:30 -- target/invalid.sh@76 -- # [[ 2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4698], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:15:23.846 request: 00:15:23.846 { 00:15:23.846 "method": "nvmf_create_subsystem", 00:15:23.846 "params": { 00:15:23.846 "nqn": "nqn.2016-06.io.spdk:cnode4698", 00:15:23.846 "min_cntlid": 65520 00:15:23.846 } 00:15:23.846 } 00:15:23.846 Got JSON-RPC error response 00:15:23.846 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.846 14:29:30 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30651 -I 0 00:15:24.104 [2024-12-06 14:29:30.848458] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30651: invalid cntlid range [1-0] 00:15:24.104 14:29:30 -- target/invalid.sh@77 -- # out='2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30651], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:15:24.104 request: 00:15:24.104 { 00:15:24.104 "method": "nvmf_create_subsystem", 00:15:24.104 "params": { 00:15:24.104 "nqn": "nqn.2016-06.io.spdk:cnode30651", 00:15:24.104 "max_cntlid": 0 00:15:24.104 } 00:15:24.104 } 00:15:24.104 Got JSON-RPC error response 00:15:24.104 GoRPCClient: error on JSON-RPC call' 00:15:24.104 14:29:30 -- target/invalid.sh@78 -- # [[ 2024/12/06 14:29:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30651], err: error received for nvmf_create_subsystem method, err: Code=-32602 
Msg=Invalid cntlid range [1-0] 00:15:24.104 request: 00:15:24.104 { 00:15:24.104 "method": "nvmf_create_subsystem", 00:15:24.104 "params": { 00:15:24.104 "nqn": "nqn.2016-06.io.spdk:cnode30651", 00:15:24.104 "max_cntlid": 0 00:15:24.104 } 00:15:24.104 } 00:15:24.104 Got JSON-RPC error response 00:15:24.104 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:24.104 14:29:30 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19496 -I 65520 00:15:24.361 [2024-12-06 14:29:31.104814] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19496: invalid cntlid range [1-65520] 00:15:24.361 14:29:31 -- target/invalid.sh@79 -- # out='2024/12/06 14:29:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode19496], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:15:24.361 request: 00:15:24.361 { 00:15:24.361 "method": "nvmf_create_subsystem", 00:15:24.361 "params": { 00:15:24.361 "nqn": "nqn.2016-06.io.spdk:cnode19496", 00:15:24.361 "max_cntlid": 65520 00:15:24.361 } 00:15:24.361 } 00:15:24.361 Got JSON-RPC error response 00:15:24.361 GoRPCClient: error on JSON-RPC call' 00:15:24.361 14:29:31 -- target/invalid.sh@80 -- # [[ 2024/12/06 14:29:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode19496], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:15:24.361 request: 00:15:24.361 { 00:15:24.361 "method": "nvmf_create_subsystem", 00:15:24.361 "params": { 00:15:24.361 "nqn": "nqn.2016-06.io.spdk:cnode19496", 00:15:24.361 "max_cntlid": 65520 00:15:24.361 } 00:15:24.361 } 00:15:24.361 Got JSON-RPC error response 00:15:24.361 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:24.361 14:29:31 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21632 -i 6 -I 5 00:15:24.619 [2024-12-06 14:29:31.353190] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21632: invalid cntlid range [6-5] 00:15:24.619 14:29:31 -- target/invalid.sh@83 -- # out='2024/12/06 14:29:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode21632], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:15:24.619 request: 00:15:24.619 { 00:15:24.619 "method": "nvmf_create_subsystem", 00:15:24.619 "params": { 00:15:24.619 "nqn": "nqn.2016-06.io.spdk:cnode21632", 00:15:24.619 "min_cntlid": 6, 00:15:24.619 "max_cntlid": 5 00:15:24.619 } 00:15:24.619 } 00:15:24.619 Got JSON-RPC error response 00:15:24.619 GoRPCClient: error on JSON-RPC call' 00:15:24.619 14:29:31 -- target/invalid.sh@84 -- # [[ 2024/12/06 14:29:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode21632], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:15:24.619 request: 00:15:24.619 { 00:15:24.619 "method": "nvmf_create_subsystem", 00:15:24.619 "params": { 00:15:24.619 "nqn": "nqn.2016-06.io.spdk:cnode21632", 00:15:24.619 "min_cntlid": 6, 00:15:24.619 "max_cntlid": 5 00:15:24.619 } 00:15:24.619 } 00:15:24.619 Got 
JSON-RPC error response 00:15:24.619 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:24.619 14:29:31 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:24.619 14:29:31 -- target/invalid.sh@87 -- # out='request: 00:15:24.619 { 00:15:24.619 "name": "foobar", 00:15:24.619 "method": "nvmf_delete_target", 00:15:24.619 "req_id": 1 00:15:24.619 } 00:15:24.619 Got JSON-RPC error response 00:15:24.619 response: 00:15:24.619 { 00:15:24.619 "code": -32602, 00:15:24.619 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:24.619 }' 00:15:24.619 14:29:31 -- target/invalid.sh@88 -- # [[ request: 00:15:24.619 { 00:15:24.619 "name": "foobar", 00:15:24.619 "method": "nvmf_delete_target", 00:15:24.619 "req_id": 1 00:15:24.619 } 00:15:24.619 Got JSON-RPC error response 00:15:24.619 response: 00:15:24.619 { 00:15:24.619 "code": -32602, 00:15:24.619 "message": "The specified target doesn't exist, cannot delete it." 00:15:24.619 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:24.619 14:29:31 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:24.619 14:29:31 -- target/invalid.sh@91 -- # nvmftestfini 00:15:24.619 14:29:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:24.619 14:29:31 -- nvmf/common.sh@116 -- # sync 00:15:24.619 14:29:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:24.619 14:29:31 -- nvmf/common.sh@119 -- # set +e 00:15:24.619 14:29:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:24.619 14:29:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:24.619 rmmod nvme_tcp 00:15:24.619 rmmod nvme_fabrics 00:15:24.879 rmmod nvme_keyring 00:15:24.879 14:29:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:24.879 14:29:31 -- nvmf/common.sh@123 -- # set -e 00:15:24.879 14:29:31 -- nvmf/common.sh@124 -- # return 0 00:15:24.879 14:29:31 -- nvmf/common.sh@477 -- # '[' -n 66998 ']' 00:15:24.879 14:29:31 -- nvmf/common.sh@478 -- # killprocess 66998 00:15:24.879 14:29:31 -- common/autotest_common.sh@936 -- # '[' -z 66998 ']' 00:15:24.879 14:29:31 -- common/autotest_common.sh@940 -- # kill -0 66998 00:15:24.879 14:29:31 -- common/autotest_common.sh@941 -- # uname 00:15:24.879 14:29:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.879 14:29:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66998 00:15:24.879 killing process with pid 66998 00:15:24.879 14:29:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:24.879 14:29:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:24.879 14:29:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66998' 00:15:24.879 14:29:31 -- common/autotest_common.sh@955 -- # kill 66998 00:15:24.879 14:29:31 -- common/autotest_common.sh@960 -- # wait 66998 00:15:25.136 14:29:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.136 14:29:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.136 14:29:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.136 14:29:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.136 14:29:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.136 14:29:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.136 14:29:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.136 14:29:31 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.136 14:29:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.136 ************************************ 00:15:25.136 END TEST nvmf_invalid 00:15:25.136 ************************************ 00:15:25.136 00:15:25.136 real 0m6.515s 00:15:25.136 user 0m25.837s 00:15:25.136 sys 0m1.372s 00:15:25.136 14:29:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:25.136 14:29:31 -- common/autotest_common.sh@10 -- # set +x 00:15:25.136 14:29:32 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:25.136 14:29:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.136 14:29:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.136 14:29:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.136 ************************************ 00:15:25.136 START TEST nvmf_abort 00:15:25.136 ************************************ 00:15:25.136 14:29:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:25.394 * Looking for test storage... 00:15:25.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.394 14:29:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:25.394 14:29:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:25.394 14:29:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:25.394 14:29:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:25.394 14:29:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:25.394 14:29:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:25.394 14:29:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:25.394 14:29:32 -- scripts/common.sh@335 -- # IFS=.-: 00:15:25.394 14:29:32 -- scripts/common.sh@335 -- # read -ra ver1 00:15:25.394 14:29:32 -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.394 14:29:32 -- scripts/common.sh@336 -- # read -ra ver2 00:15:25.394 14:29:32 -- scripts/common.sh@337 -- # local 'op=<' 00:15:25.394 14:29:32 -- scripts/common.sh@339 -- # ver1_l=2 00:15:25.394 14:29:32 -- scripts/common.sh@340 -- # ver2_l=1 00:15:25.394 14:29:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:25.394 14:29:32 -- scripts/common.sh@343 -- # case "$op" in 00:15:25.394 14:29:32 -- scripts/common.sh@344 -- # : 1 00:15:25.394 14:29:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:25.394 14:29:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.394 14:29:32 -- scripts/common.sh@364 -- # decimal 1 00:15:25.394 14:29:32 -- scripts/common.sh@352 -- # local d=1 00:15:25.394 14:29:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.394 14:29:32 -- scripts/common.sh@354 -- # echo 1 00:15:25.394 14:29:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:25.394 14:29:32 -- scripts/common.sh@365 -- # decimal 2 00:15:25.394 14:29:32 -- scripts/common.sh@352 -- # local d=2 00:15:25.394 14:29:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.394 14:29:32 -- scripts/common.sh@354 -- # echo 2 00:15:25.394 14:29:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:25.394 14:29:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:25.394 14:29:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:25.394 14:29:32 -- scripts/common.sh@367 -- # return 0 00:15:25.394 14:29:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.394 14:29:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:25.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.394 --rc genhtml_branch_coverage=1 00:15:25.394 --rc genhtml_function_coverage=1 00:15:25.394 --rc genhtml_legend=1 00:15:25.394 --rc geninfo_all_blocks=1 00:15:25.394 --rc geninfo_unexecuted_blocks=1 00:15:25.394 00:15:25.394 ' 00:15:25.394 14:29:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:25.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.394 --rc genhtml_branch_coverage=1 00:15:25.394 --rc genhtml_function_coverage=1 00:15:25.394 --rc genhtml_legend=1 00:15:25.394 --rc geninfo_all_blocks=1 00:15:25.394 --rc geninfo_unexecuted_blocks=1 00:15:25.394 00:15:25.394 ' 00:15:25.394 14:29:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:25.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.394 --rc genhtml_branch_coverage=1 00:15:25.394 --rc genhtml_function_coverage=1 00:15:25.394 --rc genhtml_legend=1 00:15:25.394 --rc geninfo_all_blocks=1 00:15:25.394 --rc geninfo_unexecuted_blocks=1 00:15:25.394 00:15:25.394 ' 00:15:25.394 14:29:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:25.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.394 --rc genhtml_branch_coverage=1 00:15:25.394 --rc genhtml_function_coverage=1 00:15:25.394 --rc genhtml_legend=1 00:15:25.394 --rc geninfo_all_blocks=1 00:15:25.394 --rc geninfo_unexecuted_blocks=1 00:15:25.394 00:15:25.394 ' 00:15:25.394 14:29:32 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.394 14:29:32 -- nvmf/common.sh@7 -- # uname -s 00:15:25.394 14:29:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.394 14:29:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.394 14:29:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.394 14:29:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.394 14:29:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.394 14:29:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.394 14:29:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.394 14:29:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.394 14:29:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.394 14:29:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.394 14:29:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:15:25.394 
14:29:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:15:25.395 14:29:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.395 14:29:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.395 14:29:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.395 14:29:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.395 14:29:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.395 14:29:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.395 14:29:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.395 14:29:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.395 14:29:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.395 14:29:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.395 14:29:32 -- paths/export.sh@5 -- # export PATH 00:15:25.395 14:29:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.395 14:29:32 -- nvmf/common.sh@46 -- # : 0 00:15:25.395 14:29:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.395 14:29:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.395 14:29:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.395 14:29:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.395 14:29:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.395 14:29:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
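
nvmftestinit, invoked just below, first tears down any stale interfaces and then has nvmf_veth_init build a veth/network-namespace topology so the initiator (10.0.0.1) and the SPDK target (10.0.0.2, plus a second interface at 10.0.0.3) sit on a common bridge. A condensed sketch of the ip/iptables sequence the trace below executes (stale-device cleanup, the "link set ... up" steps, and the 10.0.0.3 interface omitted):

  # Condensed sketch of the topology nvmf_veth_init creates in the trace below;
  # interface and namespace names are taken from the log.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) simply confirm that this topology forwards traffic before nvmf_tgt is started.
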
00:15:25.395 14:29:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.395 14:29:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.395 14:29:32 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.395 14:29:32 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:25.395 14:29:32 -- target/abort.sh@14 -- # nvmftestinit 00:15:25.395 14:29:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.395 14:29:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.395 14:29:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.395 14:29:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.395 14:29:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.395 14:29:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.395 14:29:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.395 14:29:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.395 14:29:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:25.395 14:29:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:25.395 14:29:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:25.395 14:29:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:25.395 14:29:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:25.395 14:29:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:25.395 14:29:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.395 14:29:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.395 14:29:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.395 14:29:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.395 14:29:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.395 14:29:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.395 14:29:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.395 14:29:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.395 14:29:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.395 14:29:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.395 14:29:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.395 14:29:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.395 14:29:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.395 14:29:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.395 Cannot find device "nvmf_tgt_br" 00:15:25.395 14:29:32 -- nvmf/common.sh@154 -- # true 00:15:25.395 14:29:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.395 Cannot find device "nvmf_tgt_br2" 00:15:25.395 14:29:32 -- nvmf/common.sh@155 -- # true 00:15:25.395 14:29:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:25.395 14:29:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:25.395 Cannot find device "nvmf_tgt_br" 00:15:25.395 14:29:32 -- nvmf/common.sh@157 -- # true 00:15:25.395 14:29:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:25.395 Cannot find device "nvmf_tgt_br2" 00:15:25.395 14:29:32 -- nvmf/common.sh@158 -- # true 00:15:25.395 14:29:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:25.653 14:29:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:25.653 14:29:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.653 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:25.653 14:29:32 -- nvmf/common.sh@161 -- # true 00:15:25.653 14:29:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.653 14:29:32 -- nvmf/common.sh@162 -- # true 00:15:25.653 14:29:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.653 14:29:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.653 14:29:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.653 14:29:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.653 14:29:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.653 14:29:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.653 14:29:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.653 14:29:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.653 14:29:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.653 14:29:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:25.653 14:29:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:25.653 14:29:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:25.653 14:29:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:25.653 14:29:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.653 14:29:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.653 14:29:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.653 14:29:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:25.653 14:29:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:25.653 14:29:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.653 14:29:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.653 14:29:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.653 14:29:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.653 14:29:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.653 14:29:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:25.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:15:25.653 00:15:25.653 --- 10.0.0.2 ping statistics --- 00:15:25.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.653 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:25.653 14:29:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:25.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:15:25.653 00:15:25.653 --- 10.0.0.3 ping statistics --- 00:15:25.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.653 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:25.653 14:29:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:25.653 00:15:25.653 --- 10.0.0.1 ping statistics --- 00:15:25.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.653 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:25.653 14:29:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.653 14:29:32 -- nvmf/common.sh@421 -- # return 0 00:15:25.653 14:29:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.653 14:29:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.653 14:29:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.653 14:29:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.653 14:29:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.653 14:29:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.653 14:29:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.911 14:29:32 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:25.911 14:29:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.911 14:29:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.911 14:29:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.911 14:29:32 -- nvmf/common.sh@469 -- # nvmfpid=67515 00:15:25.911 14:29:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:25.911 14:29:32 -- nvmf/common.sh@470 -- # waitforlisten 67515 00:15:25.911 14:29:32 -- common/autotest_common.sh@829 -- # '[' -z 67515 ']' 00:15:25.911 14:29:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.911 14:29:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.911 14:29:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.911 14:29:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.911 14:29:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.911 [2024-12-06 14:29:32.675869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:25.911 [2024-12-06 14:29:32.676107] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.911 [2024-12-06 14:29:32.812402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.169 [2024-12-06 14:29:32.941845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.169 [2024-12-06 14:29:32.942366] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.170 [2024-12-06 14:29:32.942554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.170 [2024-12-06 14:29:32.942767] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
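
With nvmf_tgt now launched inside the target namespace, abort.sh configures it over JSON-RPC and then drives it with the abort example application. The rpc_cmd calls visible in the trace below amount to roughly the following sequence; rpc_cmd is assumed here to be a thin wrapper around scripts/rpc.py on the default socket, and the latency values are shown exactly as they appear in the trace:

  # Condensed sketch of the abort-test configuration performed below.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0                    # 64 MB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000              # large artificial latency on every I/O
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # the abort example then floods the deliberately slow namespace and aborts the queued I/O:
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The success/unsuccess counters reported further down come from this run: of the 34189 abort commands submitted, 34124 succeeded and 65 did not.
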
00:15:26.170 [2024-12-06 14:29:32.943185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.170 [2024-12-06 14:29:32.943316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.170 [2024-12-06 14:29:32.943325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.105 14:29:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.105 14:29:33 -- common/autotest_common.sh@862 -- # return 0 00:15:27.105 14:29:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.105 14:29:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 14:29:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.105 14:29:33 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:15:27.105 14:29:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 [2024-12-06 14:29:33.754739] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.105 14:29:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 14:29:33 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:27.105 14:29:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 Malloc0 00:15:27.105 14:29:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 14:29:33 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:27.105 14:29:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 Delay0 00:15:27.105 14:29:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 14:29:33 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:27.105 14:29:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 14:29:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 14:29:33 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:27.105 14:29:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 14:29:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 14:29:33 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:27.105 14:29:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 [2024-12-06 14:29:33.824327] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.105 14:29:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 14:29:33 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.105 14:29:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 14:29:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 14:29:33 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:27.105 [2024-12-06 14:29:34.029830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:29.639 Initializing NVMe Controllers 00:15:29.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:29.639 controller IO queue size 128 less than required 00:15:29.639 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:29.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:29.639 Initialization complete. Launching workers. 00:15:29.639 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34124 00:15:29.639 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34189, failed to submit 62 00:15:29.639 success 34124, unsuccess 65, failed 0 00:15:29.639 14:29:36 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:29.639 14:29:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.639 14:29:36 -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 14:29:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.639 14:29:36 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:29.639 14:29:36 -- target/abort.sh@38 -- # nvmftestfini 00:15:29.639 14:29:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:29.639 14:29:36 -- nvmf/common.sh@116 -- # sync 00:15:29.639 14:29:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:29.639 14:29:36 -- nvmf/common.sh@119 -- # set +e 00:15:29.639 14:29:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:29.639 14:29:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:29.639 rmmod nvme_tcp 00:15:29.639 rmmod nvme_fabrics 00:15:29.639 rmmod nvme_keyring 00:15:29.639 14:29:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:29.639 14:29:36 -- nvmf/common.sh@123 -- # set -e 00:15:29.639 14:29:36 -- nvmf/common.sh@124 -- # return 0 00:15:29.639 14:29:36 -- nvmf/common.sh@477 -- # '[' -n 67515 ']' 00:15:29.639 14:29:36 -- nvmf/common.sh@478 -- # killprocess 67515 00:15:29.639 14:29:36 -- common/autotest_common.sh@936 -- # '[' -z 67515 ']' 00:15:29.639 14:29:36 -- common/autotest_common.sh@940 -- # kill -0 67515 00:15:29.639 14:29:36 -- common/autotest_common.sh@941 -- # uname 00:15:29.639 14:29:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.639 14:29:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67515 00:15:29.639 killing process with pid 67515 00:15:29.639 14:29:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:29.639 14:29:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:29.639 14:29:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67515' 00:15:29.639 14:29:36 -- common/autotest_common.sh@955 -- # kill 67515 00:15:29.639 14:29:36 -- common/autotest_common.sh@960 -- # wait 67515 00:15:29.639 14:29:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:29.639 14:29:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:29.639 14:29:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:29.639 14:29:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.639 14:29:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:29.639 14:29:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.639 
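Collapsed out of the xtrace above, the abort test's setup and run reduce to the calls below. This is a sketch assuming rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock, as it does elsewhere in this log; the flags themselves are the ones traced:

# Sketch of the abort test flow (target-side setup, then initiator-side abort storm).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The delay bdev keeps I/O queued long enough for the aborts to hit in-flight requests,
# which is why the run above reports tens of thousands of successful aborts.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128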
14:29:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.639 14:29:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.639 14:29:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:29.639 00:15:29.639 real 0m4.456s 00:15:29.639 user 0m12.625s 00:15:29.639 sys 0m1.005s 00:15:29.639 14:29:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:29.639 14:29:36 -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 ************************************ 00:15:29.639 END TEST nvmf_abort 00:15:29.639 ************************************ 00:15:29.639 14:29:36 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:29.639 14:29:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:29.639 14:29:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.639 14:29:36 -- common/autotest_common.sh@10 -- # set +x 00:15:29.639 ************************************ 00:15:29.639 START TEST nvmf_ns_hotplug_stress 00:15:29.639 ************************************ 00:15:29.639 14:29:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:29.899 * Looking for test storage... 00:15:29.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:29.899 14:29:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:29.899 14:29:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:29.899 14:29:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:29.899 14:29:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:29.899 14:29:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:29.899 14:29:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:29.899 14:29:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:29.899 14:29:36 -- scripts/common.sh@335 -- # IFS=.-: 00:15:29.899 14:29:36 -- scripts/common.sh@335 -- # read -ra ver1 00:15:29.899 14:29:36 -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.899 14:29:36 -- scripts/common.sh@336 -- # read -ra ver2 00:15:29.899 14:29:36 -- scripts/common.sh@337 -- # local 'op=<' 00:15:29.899 14:29:36 -- scripts/common.sh@339 -- # ver1_l=2 00:15:29.899 14:29:36 -- scripts/common.sh@340 -- # ver2_l=1 00:15:29.899 14:29:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:29.899 14:29:36 -- scripts/common.sh@343 -- # case "$op" in 00:15:29.899 14:29:36 -- scripts/common.sh@344 -- # : 1 00:15:29.899 14:29:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:29.899 14:29:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.899 14:29:36 -- scripts/common.sh@364 -- # decimal 1 00:15:29.899 14:29:36 -- scripts/common.sh@352 -- # local d=1 00:15:29.899 14:29:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.899 14:29:36 -- scripts/common.sh@354 -- # echo 1 00:15:29.899 14:29:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:29.899 14:29:36 -- scripts/common.sh@365 -- # decimal 2 00:15:29.899 14:29:36 -- scripts/common.sh@352 -- # local d=2 00:15:29.899 14:29:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.899 14:29:36 -- scripts/common.sh@354 -- # echo 2 00:15:29.899 14:29:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:29.899 14:29:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:29.899 14:29:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:29.899 14:29:36 -- scripts/common.sh@367 -- # return 0 00:15:29.899 14:29:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.899 14:29:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:29.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.899 --rc genhtml_branch_coverage=1 00:15:29.899 --rc genhtml_function_coverage=1 00:15:29.899 --rc genhtml_legend=1 00:15:29.899 --rc geninfo_all_blocks=1 00:15:29.899 --rc geninfo_unexecuted_blocks=1 00:15:29.899 00:15:29.899 ' 00:15:29.899 14:29:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:29.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.899 --rc genhtml_branch_coverage=1 00:15:29.899 --rc genhtml_function_coverage=1 00:15:29.899 --rc genhtml_legend=1 00:15:29.899 --rc geninfo_all_blocks=1 00:15:29.899 --rc geninfo_unexecuted_blocks=1 00:15:29.899 00:15:29.899 ' 00:15:29.899 14:29:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:29.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.899 --rc genhtml_branch_coverage=1 00:15:29.899 --rc genhtml_function_coverage=1 00:15:29.899 --rc genhtml_legend=1 00:15:29.899 --rc geninfo_all_blocks=1 00:15:29.899 --rc geninfo_unexecuted_blocks=1 00:15:29.899 00:15:29.899 ' 00:15:29.899 14:29:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:29.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.899 --rc genhtml_branch_coverage=1 00:15:29.899 --rc genhtml_function_coverage=1 00:15:29.899 --rc genhtml_legend=1 00:15:29.899 --rc geninfo_all_blocks=1 00:15:29.899 --rc geninfo_unexecuted_blocks=1 00:15:29.899 00:15:29.899 ' 00:15:29.899 14:29:36 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.899 14:29:36 -- nvmf/common.sh@7 -- # uname -s 00:15:29.899 14:29:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.899 14:29:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.899 14:29:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.899 14:29:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.899 14:29:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.899 14:29:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.899 14:29:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.899 14:29:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.899 14:29:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.899 14:29:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.900 14:29:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
00:15:29.900 14:29:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:15:29.900 14:29:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.900 14:29:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.900 14:29:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.900 14:29:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.900 14:29:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.900 14:29:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.900 14:29:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.900 14:29:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.900 14:29:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.900 14:29:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.900 14:29:36 -- paths/export.sh@5 -- # export PATH 00:15:29.900 14:29:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.900 14:29:36 -- nvmf/common.sh@46 -- # : 0 00:15:29.900 14:29:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:29.900 14:29:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:29.900 14:29:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:29.900 14:29:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.900 14:29:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.900 14:29:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:29.900 14:29:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:29.900 14:29:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:29.900 14:29:36 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.900 14:29:36 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:29.900 14:29:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:29.900 14:29:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.900 14:29:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:29.900 14:29:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:29.900 14:29:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:29.900 14:29:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.900 14:29:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.900 14:29:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.900 14:29:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:29.900 14:29:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:29.900 14:29:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:29.900 14:29:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:29.900 14:29:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:29.900 14:29:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:29.900 14:29:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.900 14:29:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.900 14:29:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:29.900 14:29:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:29.900 14:29:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.900 14:29:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.900 14:29:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.900 14:29:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.900 14:29:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.900 14:29:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.900 14:29:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.900 14:29:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.900 14:29:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:29.900 14:29:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:29.900 Cannot find device "nvmf_tgt_br" 00:15:29.900 14:29:36 -- nvmf/common.sh@154 -- # true 00:15:29.900 14:29:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.900 Cannot find device "nvmf_tgt_br2" 00:15:29.900 14:29:36 -- nvmf/common.sh@155 -- # true 00:15:29.900 14:29:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:29.900 14:29:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:29.900 Cannot find device "nvmf_tgt_br" 00:15:29.900 14:29:36 -- nvmf/common.sh@157 -- # true 00:15:29.900 14:29:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:29.900 Cannot find device "nvmf_tgt_br2" 00:15:29.900 14:29:36 -- nvmf/common.sh@158 -- # true 00:15:29.900 14:29:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:30.159 14:29:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:30.159 14:29:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.159 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:30.159 14:29:36 -- nvmf/common.sh@161 -- # true 00:15:30.159 14:29:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.159 14:29:36 -- nvmf/common.sh@162 -- # true 00:15:30.159 14:29:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.159 14:29:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.159 14:29:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.159 14:29:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.159 14:29:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.159 14:29:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.159 14:29:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.159 14:29:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.159 14:29:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.159 14:29:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:30.159 14:29:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:30.159 14:29:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:30.159 14:29:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:30.159 14:29:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.159 14:29:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.159 14:29:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.159 14:29:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:30.159 14:29:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:30.159 14:29:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.159 14:29:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.159 14:29:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.159 14:29:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.159 14:29:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.159 14:29:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:30.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:15:30.159 00:15:30.159 --- 10.0.0.2 ping statistics --- 00:15:30.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.159 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:30.159 14:29:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:30.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:30.159 00:15:30.159 --- 10.0.0.3 ping statistics --- 00:15:30.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.159 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:30.159 14:29:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:30.159 00:15:30.159 --- 10.0.0.1 ping statistics --- 00:15:30.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.159 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:30.159 14:29:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.159 14:29:37 -- nvmf/common.sh@421 -- # return 0 00:15:30.159 14:29:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:30.159 14:29:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.159 14:29:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:30.159 14:29:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:30.159 14:29:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.159 14:29:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:30.159 14:29:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:30.159 14:29:37 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:30.159 14:29:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:30.159 14:29:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.159 14:29:37 -- common/autotest_common.sh@10 -- # set +x 00:15:30.418 14:29:37 -- nvmf/common.sh@469 -- # nvmfpid=67786 00:15:30.418 14:29:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:30.418 14:29:37 -- nvmf/common.sh@470 -- # waitforlisten 67786 00:15:30.418 14:29:37 -- common/autotest_common.sh@829 -- # '[' -z 67786 ']' 00:15:30.418 14:29:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.418 14:29:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.418 14:29:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.418 14:29:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.418 14:29:37 -- common/autotest_common.sh@10 -- # set +x 00:15:30.418 [2024-12-06 14:29:37.187942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:30.418 [2024-12-06 14:29:37.188058] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.418 [2024-12-06 14:29:37.328204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.677 [2024-12-06 14:29:37.448324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:30.677 [2024-12-06 14:29:37.448512] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.677 [2024-12-06 14:29:37.448528] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.677 [2024-12-06 14:29:37.448537] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
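The nvmf_veth_init block above is what builds the topology the pings just verified: one initiator veth in the root namespace, two target veths inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge, with 10.0.0.1/.2/.3 assigned. Condensed from the traced commands (the intermediate 'ip link set ... up' calls are omitted here for brevity):

# Condensed veth/bridge topology from nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT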
00:15:30.677 [2024-12-06 14:29:37.449251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.677 [2024-12-06 14:29:37.449365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.677 [2024-12-06 14:29:37.449371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.243 14:29:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.243 14:29:38 -- common/autotest_common.sh@862 -- # return 0 00:15:31.243 14:29:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:31.243 14:29:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.243 14:29:38 -- common/autotest_common.sh@10 -- # set +x 00:15:31.243 14:29:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.243 14:29:38 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:31.243 14:29:38 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:31.810 [2024-12-06 14:29:38.488673] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.810 14:29:38 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:31.810 14:29:38 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.379 [2024-12-06 14:29:39.059209] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.379 14:29:39 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.687 14:29:39 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:32.945 Malloc0 00:15:32.945 14:29:39 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:33.203 Delay0 00:15:33.203 14:29:39 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.462 14:29:40 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:33.720 NULL1 00:15:33.720 14:29:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:33.978 14:29:40 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67929 00:15:33.978 14:29:40 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:33.978 14:29:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:33.978 14:29:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.352 Read completed with error (sct=0, sc=11) 00:15:35.352 14:29:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.352 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:35.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.609 14:29:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:35.609 14:29:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:35.868 true 00:15:35.868 14:29:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:35.868 14:29:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.802 14:29:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.802 14:29:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:36.802 14:29:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:37.060 true 00:15:37.060 14:29:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:37.060 14:29:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.993 14:29:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.251 14:29:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:38.251 14:29:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:38.509 true 00:15:38.509 14:29:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:38.509 14:29:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.766 14:29:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.023 14:29:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:39.023 14:29:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:39.281 true 00:15:39.281 14:29:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:39.281 14:29:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.537 14:29:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.794 14:29:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:39.794 14:29:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:40.051 true 00:15:40.051 14:29:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:40.051 14:29:46 -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.985 14:29:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.244 14:29:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:41.244 14:29:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:41.501 true 00:15:41.501 14:29:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:41.501 14:29:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.758 14:29:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.016 14:29:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:42.016 14:29:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:42.274 true 00:15:42.274 14:29:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:42.274 14:29:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.532 14:29:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.789 14:29:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:42.789 14:29:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:43.048 true 00:15:43.048 14:29:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:43.048 14:29:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.981 14:29:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.239 14:29:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:44.239 14:29:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:44.497 true 00:15:44.497 14:29:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:44.497 14:29:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.756 14:29:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.015 14:29:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:45.015 14:29:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:45.273 true 00:15:45.273 14:29:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:45.273 14:29:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.531 14:29:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.789 14:29:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:45.789 14:29:52 -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:46.047 true 00:15:46.047 14:29:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:46.047 14:29:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.980 14:29:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.236 14:29:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:47.236 14:29:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:47.500 true 00:15:47.500 14:29:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:47.500 14:29:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.758 14:29:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.758 14:29:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:47.758 14:29:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:48.325 true 00:15:48.325 14:29:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:48.325 14:29:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.905 14:29:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.162 14:29:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:49.162 14:29:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:49.420 true 00:15:49.420 14:29:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:49.420 14:29:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.677 14:29:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.935 14:29:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:49.935 14:29:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:50.193 true 00:15:50.193 14:29:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:50.193 14:29:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.450 14:29:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.709 14:29:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:50.709 14:29:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:51.274 true 00:15:51.274 14:29:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:51.274 14:29:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.839 14:29:58 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.404 14:29:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:52.404 14:29:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:52.662 true 00:15:52.662 14:29:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:52.662 14:29:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.040 14:30:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.298 14:30:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:54.298 14:30:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:54.555 true 00:15:54.555 14:30:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:54.555 14:30:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.488 14:30:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.746 14:30:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:55.746 14:30:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:56.004 true 00:15:56.004 14:30:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:56.004 14:30:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.571 14:30:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:56.571 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:15:56.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.829 14:30:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:56.829 14:30:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:57.087 true 00:15:57.087 14:30:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:57.087 14:30:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.021 14:30:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.277 14:30:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:58.277 14:30:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:58.534 true 00:15:58.534 14:30:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:58.534 14:30:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.790 14:30:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:59.375 14:30:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:59.375 14:30:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:59.375 true 00:15:59.375 14:30:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:15:59.375 14:30:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.632 14:30:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:59.890 14:30:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:59.890 14:30:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:00.148 true 00:16:00.148 14:30:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:16:00.148 14:30:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.406 14:30:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.665 14:30:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:00.665 14:30:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:00.923 true 00:16:01.181 14:30:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:16:01.181 14:30:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
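The iterations above are the core of the stress: while spdk_nvme_perf (PERF_PID=67929) keeps I/O running against cnode1, the script detaches and re-attaches Delay0 and resizes NULL1 one step larger each pass. A sketch of that loop; only the individual commands appear in the trace, the surrounding while structure is an assumption:

# Sketch of the hotplug/resize loop driven while spdk_nvme_perf is still running.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"    # 1001, 1002, ... as logged above
done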
00:16:02.117 14:30:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.399 14:30:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:02.399 14:30:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:02.399 true 00:16:02.658 14:30:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:16:02.658 14:30:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.917 14:30:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.176 14:30:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:03.176 14:30:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:03.435 true 00:16:03.435 14:30:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:16:03.435 14:30:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.695 14:30:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.970 14:30:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:03.970 14:30:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:04.236 Initializing NVMe Controllers 00:16:04.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:04.236 Controller IO queue size 128, less than required. 00:16:04.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:04.236 Controller IO queue size 128, less than required. 00:16:04.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:04.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:04.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:04.236 Initialization complete. Launching workers. 
00:16:04.236 ======================================================== 00:16:04.236 Latency(us) 00:16:04.236 Device Information : IOPS MiB/s Average min max 00:16:04.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1294.21 0.63 50312.96 3121.93 1105108.79 00:16:04.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10819.17 5.28 11832.04 1642.20 649460.05 00:16:04.236 ======================================================== 00:16:04.236 Total : 12113.38 5.91 15943.39 1642.20 1105108.79 00:16:04.236 00:16:04.236 true 00:16:04.236 14:30:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67929 00:16:04.236 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67929) - No such process 00:16:04.236 14:30:11 -- target/ns_hotplug_stress.sh@53 -- # wait 67929 00:16:04.236 14:30:11 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.494 14:30:11 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:04.752 14:30:11 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:04.752 14:30:11 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:04.752 14:30:11 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:04.752 14:30:11 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.752 14:30:11 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:05.335 null0 00:16:05.335 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:05.335 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:05.335 14:30:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:05.335 null1 00:16:05.335 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:05.335 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:05.335 14:30:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:05.901 null2 00:16:05.901 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:05.901 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:05.901 14:30:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:05.901 null3 00:16:06.159 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:06.159 14:30:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:06.159 14:30:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:06.418 null4 00:16:06.418 14:30:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:06.418 14:30:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:06.418 14:30:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:06.676 null5 00:16:06.676 14:30:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:06.676 14:30:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:06.676 14:30:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:06.934 null6 00:16:06.934 14:30:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:06.934 14:30:13 -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:06.934 14:30:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:07.194 null7 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:07.194 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@66 -- # wait 68910 68911 68913 68915 68918 68919 68921 68922 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.195 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.454 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.454 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.454 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.713 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.972 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.231 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:08.231 14:30:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:08.231 14:30:15 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:08.231 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:08.231 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.231 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.489 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:08.748 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.006 14:30:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.265 
14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:09.265 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:09.524 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.783 14:30:16 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:09.783 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.042 14:30:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:16:10.301 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:10.560 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.819 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:11.077 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:11.077 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.077 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:11.077 14:30:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:11.077 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.077 14:30:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.077 14:30:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:11.335 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:11.593 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.593 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.594 14:30:18 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:11.594 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.852 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.110 14:30:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:12.110 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:12.111 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:12.111 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.111 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.111 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.368 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
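The driver side of the same test shows up in the @59-@66 markers: one loop creates the null bdevs (bdev_null_create null0 through null7, each with the arguments 100 4096 as traced), a second loop launches one add_remove worker per bdev in the background and records its PID, and the script then waits on all of them (wait 68910 68911 ... in this run). A sketch under the assumption that nthreads is 8, which matches the eight bdevs and eight PIDs seen here:

    # Driver loops inferred from the @59-@66 xtrace lines; nthreads=8 is an
    # inference from this run, not read from the script text.
    nthreads=8
    for (( i = 0; i < nthreads; ++i )); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # same arguments as in the trace
    done

    pids=()
    for (( i = 0; i < nthreads; ++i )); do
        add_remove $(( i + 1 )) "null$i" &             # nsid 1..8 mapped to null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                  # @66: wait <worker pids>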
00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.625 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:12.883 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.142 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:13.142 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:13.142 14:30:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:13.142 14:30:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:13.142 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.142 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.142 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.142 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.142 14:30:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.400 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.657 14:30:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:13.657 14:30:20 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:13.657 14:30:20 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:13.657 14:30:20 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:13.657 14:30:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:13.657 14:30:20 -- nvmf/common.sh@116 -- # sync 00:16:13.657 14:30:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.657 14:30:20 -- nvmf/common.sh@119 -- # set +e 00:16:13.657 14:30:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.657 14:30:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.657 rmmod nvme_tcp 00:16:13.657 rmmod nvme_fabrics 00:16:13.657 rmmod nvme_keyring 00:16:13.657 14:30:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.657 14:30:20 -- nvmf/common.sh@123 -- # set -e 00:16:13.657 14:30:20 -- nvmf/common.sh@124 -- # return 0 00:16:13.657 14:30:20 -- nvmf/common.sh@477 -- # '[' -n 67786 ']' 00:16:13.657 14:30:20 -- nvmf/common.sh@478 -- # killprocess 67786 00:16:13.657 14:30:20 -- common/autotest_common.sh@936 -- # '[' -z 67786 ']' 00:16:13.657 14:30:20 -- common/autotest_common.sh@940 -- # kill -0 67786 00:16:13.657 14:30:20 -- common/autotest_common.sh@941 -- # uname 00:16:13.657 14:30:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.657 14:30:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67786 00:16:13.657 14:30:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:13.657 killing process with pid 67786 00:16:13.657 14:30:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:13.657 14:30:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67786' 00:16:13.657 14:30:20 -- common/autotest_common.sh@955 -- # kill 67786 00:16:13.658 14:30:20 -- common/autotest_common.sh@960 -- # wait 67786 00:16:13.977 14:30:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:13.977 14:30:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:13.977 14:30:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:13.977 14:30:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.977 14:30:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:13.977 14:30:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.977 14:30:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.977 14:30:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.977 14:30:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:13.977 ************************************ 00:16:13.977 END TEST nvmf_ns_hotplug_stress 00:16:13.977 ************************************ 00:16:13.977 00:16:13.977 real 0m44.291s 00:16:13.977 user 3m36.136s 00:16:13.977 sys 0m13.358s 00:16:13.977 14:30:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:13.977 14:30:20 -- common/autotest_common.sh@10 -- # set +x 00:16:13.977 14:30:20 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:13.977 14:30:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.977 14:30:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.977 14:30:20 -- common/autotest_common.sh@10 -- # set +x 00:16:13.977 ************************************ 00:16:13.977 START TEST nvmf_connect_stress 00:16:13.977 ************************************ 00:16:13.977 14:30:20 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:13.977 * Looking for test storage... 00:16:14.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:14.235 14:30:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:14.235 14:30:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:14.235 14:30:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:14.235 14:30:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:14.235 14:30:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:14.235 14:30:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:14.235 14:30:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:14.235 14:30:21 -- scripts/common.sh@335 -- # IFS=.-: 00:16:14.235 14:30:21 -- scripts/common.sh@335 -- # read -ra ver1 00:16:14.235 14:30:21 -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.235 14:30:21 -- scripts/common.sh@336 -- # read -ra ver2 00:16:14.235 14:30:21 -- scripts/common.sh@337 -- # local 'op=<' 00:16:14.235 14:30:21 -- scripts/common.sh@339 -- # ver1_l=2 00:16:14.235 14:30:21 -- scripts/common.sh@340 -- # ver2_l=1 00:16:14.235 14:30:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:14.235 14:30:21 -- scripts/common.sh@343 -- # case "$op" in 00:16:14.235 14:30:21 -- scripts/common.sh@344 -- # : 1 00:16:14.235 14:30:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:14.235 14:30:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:14.235 14:30:21 -- scripts/common.sh@364 -- # decimal 1 00:16:14.235 14:30:21 -- scripts/common.sh@352 -- # local d=1 00:16:14.235 14:30:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.235 14:30:21 -- scripts/common.sh@354 -- # echo 1 00:16:14.235 14:30:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:14.235 14:30:21 -- scripts/common.sh@365 -- # decimal 2 00:16:14.235 14:30:21 -- scripts/common.sh@352 -- # local d=2 00:16:14.235 14:30:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.235 14:30:21 -- scripts/common.sh@354 -- # echo 2 00:16:14.235 14:30:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:14.235 14:30:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:14.235 14:30:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:14.235 14:30:21 -- scripts/common.sh@367 -- # return 0 00:16:14.235 14:30:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.235 14:30:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.235 --rc genhtml_branch_coverage=1 00:16:14.235 --rc genhtml_function_coverage=1 00:16:14.235 --rc genhtml_legend=1 00:16:14.235 --rc geninfo_all_blocks=1 00:16:14.235 --rc geninfo_unexecuted_blocks=1 00:16:14.235 00:16:14.235 ' 00:16:14.235 14:30:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.235 --rc genhtml_branch_coverage=1 00:16:14.235 --rc genhtml_function_coverage=1 00:16:14.235 --rc genhtml_legend=1 00:16:14.235 --rc geninfo_all_blocks=1 00:16:14.235 --rc geninfo_unexecuted_blocks=1 00:16:14.235 00:16:14.235 ' 00:16:14.235 14:30:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.235 --rc genhtml_branch_coverage=1 00:16:14.235 --rc genhtml_function_coverage=1 00:16:14.235 --rc genhtml_legend=1 
00:16:14.235 --rc geninfo_all_blocks=1 00:16:14.235 --rc geninfo_unexecuted_blocks=1 00:16:14.235 00:16:14.235 ' 00:16:14.235 14:30:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.235 --rc genhtml_branch_coverage=1 00:16:14.235 --rc genhtml_function_coverage=1 00:16:14.235 --rc genhtml_legend=1 00:16:14.235 --rc geninfo_all_blocks=1 00:16:14.235 --rc geninfo_unexecuted_blocks=1 00:16:14.235 00:16:14.235 ' 00:16:14.235 14:30:21 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.235 14:30:21 -- nvmf/common.sh@7 -- # uname -s 00:16:14.235 14:30:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.235 14:30:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.235 14:30:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.235 14:30:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.235 14:30:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.235 14:30:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.235 14:30:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.236 14:30:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.236 14:30:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.236 14:30:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.236 14:30:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:16:14.236 14:30:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:16:14.236 14:30:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.236 14:30:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.236 14:30:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.236 14:30:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.236 14:30:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.236 14:30:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.236 14:30:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.236 14:30:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.236 14:30:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.236 14:30:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.236 14:30:21 -- paths/export.sh@5 -- # export PATH 00:16:14.236 14:30:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.236 14:30:21 -- nvmf/common.sh@46 -- # : 0 00:16:14.236 14:30:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.236 14:30:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.236 14:30:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.236 14:30:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.236 14:30:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.236 14:30:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:14.236 14:30:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.236 14:30:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.236 14:30:21 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:14.236 14:30:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.236 14:30:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.236 14:30:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.236 14:30:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.236 14:30:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.236 14:30:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.236 14:30:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.236 14:30:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.236 14:30:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:14.236 14:30:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:14.236 14:30:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:14.236 14:30:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:14.236 14:30:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:14.236 14:30:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:14.236 14:30:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.236 14:30:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.236 14:30:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.236 14:30:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:14.236 14:30:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.236 14:30:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.236 14:30:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.236 14:30:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:14.236 14:30:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.236 14:30:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.236 14:30:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.236 14:30:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.236 14:30:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:14.236 14:30:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:14.236 Cannot find device "nvmf_tgt_br" 00:16:14.236 14:30:21 -- nvmf/common.sh@154 -- # true 00:16:14.236 14:30:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.236 Cannot find device "nvmf_tgt_br2" 00:16:14.236 14:30:21 -- nvmf/common.sh@155 -- # true 00:16:14.236 14:30:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:14.236 14:30:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:14.236 Cannot find device "nvmf_tgt_br" 00:16:14.236 14:30:21 -- nvmf/common.sh@157 -- # true 00:16:14.236 14:30:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:14.236 Cannot find device "nvmf_tgt_br2" 00:16:14.236 14:30:21 -- nvmf/common.sh@158 -- # true 00:16:14.236 14:30:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:14.236 14:30:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:14.236 14:30:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.236 14:30:21 -- nvmf/common.sh@161 -- # true 00:16:14.236 14:30:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.236 14:30:21 -- nvmf/common.sh@162 -- # true 00:16:14.236 14:30:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.492 14:30:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.492 14:30:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.492 14:30:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.492 14:30:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.492 14:30:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.492 14:30:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.492 14:30:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.492 14:30:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.492 14:30:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:14.492 14:30:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:14.492 14:30:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:14.492 14:30:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:14.492 14:30:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.492 14:30:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.492 14:30:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.492 14:30:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:14.492 14:30:21 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:14.492 14:30:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.492 14:30:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.492 14:30:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.492 14:30:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.492 14:30:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.492 14:30:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:14.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:14.492 00:16:14.492 --- 10.0.0.2 ping statistics --- 00:16:14.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.492 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:14.493 14:30:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:14.493 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:14.493 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:14.493 00:16:14.493 --- 10.0.0.3 ping statistics --- 00:16:14.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.493 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:14.493 14:30:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:14.493 00:16:14.493 --- 10.0.0.1 ping statistics --- 00:16:14.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.493 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:14.493 14:30:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.493 14:30:21 -- nvmf/common.sh@421 -- # return 0 00:16:14.493 14:30:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.493 14:30:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.493 14:30:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.493 14:30:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.493 14:30:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.493 14:30:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.493 14:30:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.493 14:30:21 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:14.493 14:30:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.493 14:30:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.493 14:30:21 -- common/autotest_common.sh@10 -- # set +x 00:16:14.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.493 14:30:21 -- nvmf/common.sh@469 -- # nvmfpid=70275 00:16:14.493 14:30:21 -- nvmf/common.sh@470 -- # waitforlisten 70275 00:16:14.493 14:30:21 -- common/autotest_common.sh@829 -- # '[' -z 70275 ']' 00:16:14.493 14:30:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.493 14:30:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.493 14:30:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
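Before the connect_stress target is started, nvmftestinit runs nvmf_veth_init from test/nvmf/common.sh, and the ip/iptables calls traced above rebuild the virtual test network: an nvmf_tgt_ns_spdk namespace holding the two target interfaces, a veth pair for the initiator, the 10.0.0.1-10.0.0.3 addresses, an nvmf_br bridge tying the peer ends together, two iptables rules, and three single-packet pings as a sanity check. Condensed from the commands in this log (the function in nvmf/common.sh is authoritative):

    # nvmf_veth_init steps as traced at nvmf/common.sh@165-@206 above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listener
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> first target address
    ping -c 1 10.0.0.3                                    # initiator -> second target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator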
00:16:14.493 14:30:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.493 14:30:21 -- common/autotest_common.sh@10 -- # set +x 00:16:14.493 14:30:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:14.750 [2024-12-06 14:30:21.504943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:14.750 [2024-12-06 14:30:21.505346] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.750 [2024-12-06 14:30:21.647157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.007 [2024-12-06 14:30:21.786247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:15.007 [2024-12-06 14:30:21.786453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.007 [2024-12-06 14:30:21.786472] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.007 [2024-12-06 14:30:21.786494] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.007 [2024-12-06 14:30:21.786601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.007 [2024-12-06 14:30:21.786869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.007 [2024-12-06 14:30:21.786878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.938 14:30:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.938 14:30:22 -- common/autotest_common.sh@862 -- # return 0 00:16:15.938 14:30:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.938 14:30:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.938 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:15.938 14:30:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.938 14:30:22 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.938 14:30:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.938 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:15.938 [2024-12-06 14:30:22.609919] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.938 14:30:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.938 14:30:22 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.938 14:30:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.938 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:15.938 14:30:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.938 14:30:22 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.938 14:30:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.938 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:15.938 [2024-12-06 14:30:22.627892] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.938 14:30:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.938 14:30:22 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:15.938 14:30:22 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.938 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:15.938 NULL1 00:16:15.938 14:30:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.938 14:30:22 -- target/connect_stress.sh@21 -- # PERF_PID=70327 00:16:15.938 14:30:22 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:15.938 14:30:22 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:15.938 14:30:22 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.938 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.938 14:30:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.939 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.939 14:30:22 -- target/connect_stress.sh@27 -- # for 
i in $(seq 1 20) 00:16:15.939 14:30:22 -- target/connect_stress.sh@28 -- # cat 00:16:15.939 14:30:22 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:15.939 14:30:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.939 14:30:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.939 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:16:16.196 14:30:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.196 14:30:23 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:16.196 14:30:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.196 14:30:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.196 14:30:23 -- common/autotest_common.sh@10 -- # set +x 00:16:16.453 14:30:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.453 14:30:23 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:16.453 14:30:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.453 14:30:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.453 14:30:23 -- common/autotest_common.sh@10 -- # set +x 00:16:16.711 14:30:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.711 14:30:23 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:16.711 14:30:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.711 14:30:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.711 14:30:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.278 14:30:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.278 14:30:23 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:17.278 14:30:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.278 14:30:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.278 14:30:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.536 14:30:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.536 14:30:24 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:17.536 14:30:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.536 14:30:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.536 14:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:17.794 14:30:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.794 14:30:24 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:17.794 14:30:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.794 14:30:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.794 14:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:18.053 14:30:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.053 14:30:24 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:18.053 14:30:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.053 14:30:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.053 14:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:18.319 14:30:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.319 14:30:25 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:18.319 14:30:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.319 14:30:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.319 14:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:18.888 14:30:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.888 14:30:25 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:18.888 14:30:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.888 14:30:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.888 14:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.145 14:30:25 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.145 14:30:25 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:19.145 14:30:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.145 14:30:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.145 14:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.403 14:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.403 14:30:26 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:19.403 14:30:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.403 14:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.403 14:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:19.661 14:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.661 14:30:26 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:19.661 14:30:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.661 14:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.661 14:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.227 14:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.227 14:30:26 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:20.227 14:30:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.227 14:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.227 14:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.485 14:30:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.485 14:30:27 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:20.485 14:30:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.486 14:30:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.486 14:30:27 -- common/autotest_common.sh@10 -- # set +x 00:16:20.745 14:30:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.745 14:30:27 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:20.745 14:30:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.745 14:30:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.745 14:30:27 -- common/autotest_common.sh@10 -- # set +x 00:16:21.002 14:30:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.002 14:30:27 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:21.002 14:30:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.002 14:30:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.002 14:30:27 -- common/autotest_common.sh@10 -- # set +x 00:16:21.259 14:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.259 14:30:28 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:21.259 14:30:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.259 14:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.259 14:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:21.826 14:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.826 14:30:28 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:21.826 14:30:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.826 14:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.826 14:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:22.084 14:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.084 14:30:28 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:22.084 14:30:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.084 14:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.084 14:30:28 -- common/autotest_common.sh@10 -- # set +x 00:16:22.342 14:30:29 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:22.342 14:30:29 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:22.342 14:30:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.342 14:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.342 14:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:22.600 14:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.600 14:30:29 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:22.600 14:30:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.600 14:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.600 14:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:22.860 14:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.860 14:30:29 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:22.860 14:30:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.860 14:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.860 14:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 14:30:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.427 14:30:30 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:23.427 14:30:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.427 14:30:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.427 14:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:23.684 14:30:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.684 14:30:30 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:23.684 14:30:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.684 14:30:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.684 14:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:23.942 14:30:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.942 14:30:30 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:23.942 14:30:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.942 14:30:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.942 14:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.201 14:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.201 14:30:31 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:24.201 14:30:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.201 14:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.201 14:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.459 14:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.459 14:30:31 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:24.459 14:30:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.459 14:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.459 14:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:25.029 14:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.029 14:30:31 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:25.029 14:30:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.029 14:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.029 14:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:25.290 14:30:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.290 14:30:32 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:25.290 14:30:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.290 14:30:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.290 14:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:25.548 14:30:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.548 14:30:32 
-- target/connect_stress.sh@34 -- # kill -0 70327 00:16:25.548 14:30:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.548 14:30:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.548 14:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:25.806 14:30:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.806 14:30:32 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:25.806 14:30:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.806 14:30:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.806 14:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:26.064 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.064 14:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.064 14:30:33 -- target/connect_stress.sh@34 -- # kill -0 70327 00:16:26.064 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70327) - No such process 00:16:26.064 14:30:33 -- target/connect_stress.sh@38 -- # wait 70327 00:16:26.064 14:30:33 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:26.064 14:30:33 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:26.064 14:30:33 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:26.064 14:30:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:26.064 14:30:33 -- nvmf/common.sh@116 -- # sync 00:16:26.322 14:30:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:26.322 14:30:33 -- nvmf/common.sh@119 -- # set +e 00:16:26.322 14:30:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:26.322 14:30:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:26.322 rmmod nvme_tcp 00:16:26.322 rmmod nvme_fabrics 00:16:26.322 rmmod nvme_keyring 00:16:26.322 14:30:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:26.322 14:30:33 -- nvmf/common.sh@123 -- # set -e 00:16:26.322 14:30:33 -- nvmf/common.sh@124 -- # return 0 00:16:26.322 14:30:33 -- nvmf/common.sh@477 -- # '[' -n 70275 ']' 00:16:26.322 14:30:33 -- nvmf/common.sh@478 -- # killprocess 70275 00:16:26.322 14:30:33 -- common/autotest_common.sh@936 -- # '[' -z 70275 ']' 00:16:26.322 14:30:33 -- common/autotest_common.sh@940 -- # kill -0 70275 00:16:26.322 14:30:33 -- common/autotest_common.sh@941 -- # uname 00:16:26.322 14:30:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:26.322 14:30:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70275 00:16:26.322 killing process with pid 70275 00:16:26.322 14:30:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:26.322 14:30:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:26.322 14:30:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70275' 00:16:26.322 14:30:33 -- common/autotest_common.sh@955 -- # kill 70275 00:16:26.322 14:30:33 -- common/autotest_common.sh@960 -- # wait 70275 00:16:26.580 14:30:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:26.580 14:30:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:26.580 14:30:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:26.580 14:30:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.580 14:30:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:26.580 14:30:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.580 14:30:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.580 14:30:33 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:26.580 14:30:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:26.580 ************************************ 00:16:26.580 END TEST nvmf_connect_stress 00:16:26.580 ************************************ 00:16:26.580 00:16:26.580 real 0m12.590s 00:16:26.580 user 0m41.730s 00:16:26.580 sys 0m3.060s 00:16:26.580 14:30:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:26.580 14:30:33 -- common/autotest_common.sh@10 -- # set +x 00:16:26.580 14:30:33 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:26.580 14:30:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:26.580 14:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.580 14:30:33 -- common/autotest_common.sh@10 -- # set +x 00:16:26.580 ************************************ 00:16:26.580 START TEST nvmf_fused_ordering 00:16:26.580 ************************************ 00:16:26.580 14:30:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:26.839 * Looking for test storage... 00:16:26.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.839 14:30:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:26.839 14:30:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:26.839 14:30:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:26.839 14:30:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:26.839 14:30:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:26.839 14:30:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:26.839 14:30:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:26.839 14:30:33 -- scripts/common.sh@335 -- # IFS=.-: 00:16:26.839 14:30:33 -- scripts/common.sh@335 -- # read -ra ver1 00:16:26.839 14:30:33 -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.839 14:30:33 -- scripts/common.sh@336 -- # read -ra ver2 00:16:26.839 14:30:33 -- scripts/common.sh@337 -- # local 'op=<' 00:16:26.839 14:30:33 -- scripts/common.sh@339 -- # ver1_l=2 00:16:26.839 14:30:33 -- scripts/common.sh@340 -- # ver2_l=1 00:16:26.839 14:30:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:26.839 14:30:33 -- scripts/common.sh@343 -- # case "$op" in 00:16:26.839 14:30:33 -- scripts/common.sh@344 -- # : 1 00:16:26.839 14:30:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:26.839 14:30:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.839 14:30:33 -- scripts/common.sh@364 -- # decimal 1 00:16:26.839 14:30:33 -- scripts/common.sh@352 -- # local d=1 00:16:26.839 14:30:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.839 14:30:33 -- scripts/common.sh@354 -- # echo 1 00:16:26.839 14:30:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:26.839 14:30:33 -- scripts/common.sh@365 -- # decimal 2 00:16:26.839 14:30:33 -- scripts/common.sh@352 -- # local d=2 00:16:26.839 14:30:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.839 14:30:33 -- scripts/common.sh@354 -- # echo 2 00:16:26.839 14:30:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:26.839 14:30:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:26.839 14:30:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:26.839 14:30:33 -- scripts/common.sh@367 -- # return 0 00:16:26.839 14:30:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.839 14:30:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:26.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.839 --rc genhtml_branch_coverage=1 00:16:26.839 --rc genhtml_function_coverage=1 00:16:26.839 --rc genhtml_legend=1 00:16:26.839 --rc geninfo_all_blocks=1 00:16:26.839 --rc geninfo_unexecuted_blocks=1 00:16:26.840 00:16:26.840 ' 00:16:26.840 14:30:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.840 --rc genhtml_branch_coverage=1 00:16:26.840 --rc genhtml_function_coverage=1 00:16:26.840 --rc genhtml_legend=1 00:16:26.840 --rc geninfo_all_blocks=1 00:16:26.840 --rc geninfo_unexecuted_blocks=1 00:16:26.840 00:16:26.840 ' 00:16:26.840 14:30:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.840 --rc genhtml_branch_coverage=1 00:16:26.840 --rc genhtml_function_coverage=1 00:16:26.840 --rc genhtml_legend=1 00:16:26.840 --rc geninfo_all_blocks=1 00:16:26.840 --rc geninfo_unexecuted_blocks=1 00:16:26.840 00:16:26.840 ' 00:16:26.840 14:30:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.840 --rc genhtml_branch_coverage=1 00:16:26.840 --rc genhtml_function_coverage=1 00:16:26.840 --rc genhtml_legend=1 00:16:26.840 --rc geninfo_all_blocks=1 00:16:26.840 --rc geninfo_unexecuted_blocks=1 00:16:26.840 00:16:26.840 ' 00:16:26.840 14:30:33 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.840 14:30:33 -- nvmf/common.sh@7 -- # uname -s 00:16:26.840 14:30:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.840 14:30:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.840 14:30:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.840 14:30:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.840 14:30:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.840 14:30:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.840 14:30:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.840 14:30:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.840 14:30:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.840 14:30:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.840 14:30:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
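The scripts/common.sh trace above (lcov --version fed through lt 1.15 2 and cmp_versions) is a plain dotted-version comparison: both version strings are split on the characters in ".-:" and compared component by component, and the result decides which LCOV option spelling the test run exports. An illustrative stand-alone version of the same idea, simplified to split on "." only; it is a sketch, not the common.sh helper itself:

# version_lt A B: succeed (return 0) when dotted version A is older than B.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0   # first differing component decides
        (( a > b )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

# The check in the trace asks whether lcov 1.15 predates the 2.x series.
if version_lt 1.15 2; then
    echo "lcov older than 2.x: keep the pre-2.0 --rc option names"
fi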
00:16:26.840 14:30:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:16:26.840 14:30:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.840 14:30:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.840 14:30:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.840 14:30:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.840 14:30:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.840 14:30:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.840 14:30:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.840 14:30:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.840 14:30:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.840 14:30:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.840 14:30:33 -- paths/export.sh@5 -- # export PATH 00:16:26.840 14:30:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.840 14:30:33 -- nvmf/common.sh@46 -- # : 0 00:16:26.840 14:30:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:26.840 14:30:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:26.840 14:30:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:26.840 14:30:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.840 14:30:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.840 14:30:33 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:16:26.840 14:30:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:26.840 14:30:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:26.840 14:30:33 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:26.840 14:30:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:26.840 14:30:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.840 14:30:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:26.840 14:30:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:26.840 14:30:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:26.840 14:30:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.840 14:30:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.840 14:30:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.840 14:30:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:26.840 14:30:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:26.840 14:30:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:26.840 14:30:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:26.840 14:30:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:26.840 14:30:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:26.840 14:30:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.840 14:30:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.840 14:30:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.840 14:30:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:26.840 14:30:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.840 14:30:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.840 14:30:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.840 14:30:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.840 14:30:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.840 14:30:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.840 14:30:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.840 14:30:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.840 14:30:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:26.840 14:30:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:26.840 Cannot find device "nvmf_tgt_br" 00:16:26.840 14:30:33 -- nvmf/common.sh@154 -- # true 00:16:26.840 14:30:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.840 Cannot find device "nvmf_tgt_br2" 00:16:26.840 14:30:33 -- nvmf/common.sh@155 -- # true 00:16:26.840 14:30:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:26.840 14:30:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:27.098 Cannot find device "nvmf_tgt_br" 00:16:27.098 14:30:33 -- nvmf/common.sh@157 -- # true 00:16:27.098 14:30:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:27.098 Cannot find device "nvmf_tgt_br2" 00:16:27.098 14:30:33 -- nvmf/common.sh@158 -- # true 00:16:27.098 14:30:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:27.098 14:30:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:27.098 14:30:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.098 14:30:33 -- nvmf/common.sh@161 -- # true 00:16:27.098 14:30:33 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.098 14:30:33 -- nvmf/common.sh@162 -- # true 00:16:27.098 14:30:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.098 14:30:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.098 14:30:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.098 14:30:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.098 14:30:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.098 14:30:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.098 14:30:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.098 14:30:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.098 14:30:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.098 14:30:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:27.099 14:30:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:27.099 14:30:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:27.099 14:30:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:27.099 14:30:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.099 14:30:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.099 14:30:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.099 14:30:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:27.099 14:30:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:27.099 14:30:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.099 14:30:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.099 14:30:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.356 14:30:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.356 14:30:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.356 14:30:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:27.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:16:27.356 00:16:27.356 --- 10.0.0.2 ping statistics --- 00:16:27.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.356 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:16:27.356 14:30:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:27.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:27.356 00:16:27.356 --- 10.0.0.3 ping statistics --- 00:16:27.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.356 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:27.356 14:30:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:27.356 00:16:27.356 --- 10.0.0.1 ping statistics --- 00:16:27.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.356 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:27.356 14:30:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.357 14:30:34 -- nvmf/common.sh@421 -- # return 0 00:16:27.357 14:30:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:27.357 14:30:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.357 14:30:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:27.357 14:30:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:27.357 14:30:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.357 14:30:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:27.357 14:30:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:27.357 14:30:34 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:27.357 14:30:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:27.357 14:30:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.357 14:30:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.357 14:30:34 -- nvmf/common.sh@469 -- # nvmfpid=70663 00:16:27.357 14:30:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.357 14:30:34 -- nvmf/common.sh@470 -- # waitforlisten 70663 00:16:27.357 14:30:34 -- common/autotest_common.sh@829 -- # '[' -z 70663 ']' 00:16:27.357 14:30:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.357 14:30:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.357 14:30:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.357 14:30:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.357 14:30:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.357 [2024-12-06 14:30:34.201465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:27.357 [2024-12-06 14:30:34.202525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.615 [2024-12-06 14:30:34.346215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.615 [2024-12-06 14:30:34.485227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:27.615 [2024-12-06 14:30:34.485708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.615 [2024-12-06 14:30:34.485857] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.615 [2024-12-06 14:30:34.486070] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
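nvmfappstart above boils down to two steps that the trace interleaves with xtrace noise: launch nvmf_tgt inside the target namespace, then poll until its JSON-RPC socket answers so the test can start issuing rpc_cmd calls. A minimal sketch of that flow with the same flags as the trace; the wait loop here is illustrative, while the suite's waitforlisten helper does more bookkeeping (pid checks, retry limits):

# Start the target in the namespace that owns the 10.0.0.2/10.0.0.3 interfaces.
# Flags mirror the trace: shm id 0, all tracepoint groups enabled, core mask 0x2.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Illustrative wait: poll the default UNIX-domain RPC socket until it responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done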
00:16:27.615 [2024-12-06 14:30:34.486212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.551 14:30:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.551 14:30:35 -- common/autotest_common.sh@862 -- # return 0 00:16:28.551 14:30:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:28.551 14:30:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.551 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 14:30:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.551 14:30:35 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.551 14:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.551 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 [2024-12-06 14:30:35.260443] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.551 14:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.551 14:30:35 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:28.551 14:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.551 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 14:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.551 14:30:35 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.551 14:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.551 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 [2024-12-06 14:30:35.276575] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.551 14:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.551 14:30:35 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:28.551 14:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.551 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 NULL1 00:16:28.551 14:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.551 14:30:35 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:28.551 14:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.551 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 14:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.551 14:30:35 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:28.551 14:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.551 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 14:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.551 14:30:35 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:28.551 [2024-12-06 14:30:35.331564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
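With the target listening on the RPC socket, fused_ordering.sh provisions it exactly as the rpc_cmd calls above show: create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev (reported by the initiator as a 1 GB namespace) attached as namespace 1, then point the fused_ordering initiator at the new subsystem. The same sequence written directly against scripts/rpc.py, with every argument copied from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Drive the fused-ordering initiator against the listener (binary path from the trace).
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'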
00:16:28.551 [2024-12-06 14:30:35.331631] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70713 ] 00:16:28.810 Attached to nqn.2016-06.io.spdk:cnode1 00:16:28.810 Namespace ID: 1 size: 1GB 00:16:28.810 fused_ordering(0) 00:16:28.810 fused_ordering(1) 00:16:28.810 fused_ordering(2) 00:16:28.810 fused_ordering(3) 00:16:28.810 fused_ordering(4) 00:16:28.810 fused_ordering(5) 00:16:28.810 fused_ordering(6) 00:16:28.810 fused_ordering(7) 00:16:28.810 fused_ordering(8) 00:16:28.810 fused_ordering(9) 00:16:28.810 fused_ordering(10) 00:16:28.810 fused_ordering(11) 00:16:28.810 fused_ordering(12) 00:16:28.810 fused_ordering(13) 00:16:28.810 fused_ordering(14) 00:16:28.810 fused_ordering(15) 00:16:28.810 fused_ordering(16) 00:16:28.810 fused_ordering(17) 00:16:28.810 fused_ordering(18) 00:16:28.810 fused_ordering(19) 00:16:28.810 fused_ordering(20) 00:16:28.810 fused_ordering(21) 00:16:28.810 fused_ordering(22) 00:16:28.810 fused_ordering(23) 00:16:28.810 fused_ordering(24) 00:16:28.810 fused_ordering(25) 00:16:28.810 fused_ordering(26) 00:16:28.810 fused_ordering(27) 00:16:28.810 fused_ordering(28) 00:16:28.810 fused_ordering(29) 00:16:28.810 fused_ordering(30) 00:16:28.810 fused_ordering(31) 00:16:28.810 fused_ordering(32) 00:16:28.810 fused_ordering(33) 00:16:28.810 fused_ordering(34) 00:16:28.810 fused_ordering(35) 00:16:28.810 fused_ordering(36) 00:16:28.810 fused_ordering(37) 00:16:28.810 fused_ordering(38) 00:16:28.810 fused_ordering(39) 00:16:28.810 fused_ordering(40) 00:16:28.810 fused_ordering(41) 00:16:28.810 fused_ordering(42) 00:16:28.810 fused_ordering(43) 00:16:28.810 fused_ordering(44) 00:16:28.810 fused_ordering(45) 00:16:28.810 fused_ordering(46) 00:16:28.810 fused_ordering(47) 00:16:28.810 fused_ordering(48) 00:16:28.810 fused_ordering(49) 00:16:28.810 fused_ordering(50) 00:16:28.810 fused_ordering(51) 00:16:28.810 fused_ordering(52) 00:16:28.810 fused_ordering(53) 00:16:28.810 fused_ordering(54) 00:16:28.810 fused_ordering(55) 00:16:28.810 fused_ordering(56) 00:16:28.810 fused_ordering(57) 00:16:28.810 fused_ordering(58) 00:16:28.810 fused_ordering(59) 00:16:28.810 fused_ordering(60) 00:16:28.810 fused_ordering(61) 00:16:28.810 fused_ordering(62) 00:16:28.810 fused_ordering(63) 00:16:28.810 fused_ordering(64) 00:16:28.810 fused_ordering(65) 00:16:28.810 fused_ordering(66) 00:16:28.810 fused_ordering(67) 00:16:28.810 fused_ordering(68) 00:16:28.810 fused_ordering(69) 00:16:28.810 fused_ordering(70) 00:16:28.810 fused_ordering(71) 00:16:28.810 fused_ordering(72) 00:16:28.810 fused_ordering(73) 00:16:28.810 fused_ordering(74) 00:16:28.810 fused_ordering(75) 00:16:28.810 fused_ordering(76) 00:16:28.810 fused_ordering(77) 00:16:28.810 fused_ordering(78) 00:16:28.810 fused_ordering(79) 00:16:28.810 fused_ordering(80) 00:16:28.810 fused_ordering(81) 00:16:28.810 fused_ordering(82) 00:16:28.810 fused_ordering(83) 00:16:28.810 fused_ordering(84) 00:16:28.810 fused_ordering(85) 00:16:28.810 fused_ordering(86) 00:16:28.810 fused_ordering(87) 00:16:28.810 fused_ordering(88) 00:16:28.810 fused_ordering(89) 00:16:28.810 fused_ordering(90) 00:16:28.810 fused_ordering(91) 00:16:28.810 fused_ordering(92) 00:16:28.810 fused_ordering(93) 00:16:28.810 fused_ordering(94) 00:16:28.810 fused_ordering(95) 00:16:28.810 fused_ordering(96) 00:16:28.810 fused_ordering(97) 00:16:28.810 fused_ordering(98) 
00:16:28.810 fused_ordering(99) 00:16:28.810 fused_ordering(100) 00:16:28.810 fused_ordering(101) 00:16:28.810 fused_ordering(102) 00:16:28.810 fused_ordering(103) 00:16:28.810 fused_ordering(104) 00:16:28.810 fused_ordering(105) 00:16:28.810 fused_ordering(106) 00:16:28.810 fused_ordering(107) 00:16:28.810 fused_ordering(108) 00:16:28.810 fused_ordering(109) 00:16:28.810 fused_ordering(110) 00:16:28.810 fused_ordering(111) 00:16:28.810 fused_ordering(112) 00:16:28.810 fused_ordering(113) 00:16:28.810 fused_ordering(114) 00:16:28.810 fused_ordering(115) 00:16:28.810 fused_ordering(116) 00:16:28.810 fused_ordering(117) 00:16:28.810 fused_ordering(118) 00:16:28.810 fused_ordering(119) 00:16:28.810 fused_ordering(120) 00:16:28.810 fused_ordering(121) 00:16:28.810 fused_ordering(122) 00:16:28.810 fused_ordering(123) 00:16:28.810 fused_ordering(124) 00:16:28.810 fused_ordering(125) 00:16:28.810 fused_ordering(126) 00:16:28.810 fused_ordering(127) 00:16:28.810 fused_ordering(128) 00:16:28.810 fused_ordering(129) 00:16:28.810 fused_ordering(130) 00:16:28.810 fused_ordering(131) 00:16:28.810 fused_ordering(132) 00:16:28.810 fused_ordering(133) 00:16:28.810 fused_ordering(134) 00:16:28.810 fused_ordering(135) 00:16:28.810 fused_ordering(136) 00:16:28.810 fused_ordering(137) 00:16:28.810 fused_ordering(138) 00:16:28.810 fused_ordering(139) 00:16:28.810 fused_ordering(140) 00:16:28.810 fused_ordering(141) 00:16:28.810 fused_ordering(142) 00:16:28.810 fused_ordering(143) 00:16:28.810 fused_ordering(144) 00:16:28.810 fused_ordering(145) 00:16:28.810 fused_ordering(146) 00:16:28.810 fused_ordering(147) 00:16:28.810 fused_ordering(148) 00:16:28.810 fused_ordering(149) 00:16:28.810 fused_ordering(150) 00:16:28.810 fused_ordering(151) 00:16:28.810 fused_ordering(152) 00:16:28.810 fused_ordering(153) 00:16:28.810 fused_ordering(154) 00:16:28.810 fused_ordering(155) 00:16:28.810 fused_ordering(156) 00:16:28.810 fused_ordering(157) 00:16:28.810 fused_ordering(158) 00:16:28.810 fused_ordering(159) 00:16:28.810 fused_ordering(160) 00:16:28.810 fused_ordering(161) 00:16:28.810 fused_ordering(162) 00:16:28.810 fused_ordering(163) 00:16:28.810 fused_ordering(164) 00:16:28.810 fused_ordering(165) 00:16:28.810 fused_ordering(166) 00:16:28.810 fused_ordering(167) 00:16:28.810 fused_ordering(168) 00:16:28.810 fused_ordering(169) 00:16:28.810 fused_ordering(170) 00:16:28.810 fused_ordering(171) 00:16:28.810 fused_ordering(172) 00:16:28.810 fused_ordering(173) 00:16:28.810 fused_ordering(174) 00:16:28.810 fused_ordering(175) 00:16:28.810 fused_ordering(176) 00:16:28.810 fused_ordering(177) 00:16:28.810 fused_ordering(178) 00:16:28.810 fused_ordering(179) 00:16:28.810 fused_ordering(180) 00:16:28.810 fused_ordering(181) 00:16:28.810 fused_ordering(182) 00:16:28.810 fused_ordering(183) 00:16:28.810 fused_ordering(184) 00:16:28.810 fused_ordering(185) 00:16:28.810 fused_ordering(186) 00:16:28.810 fused_ordering(187) 00:16:28.810 fused_ordering(188) 00:16:28.810 fused_ordering(189) 00:16:28.810 fused_ordering(190) 00:16:28.810 fused_ordering(191) 00:16:28.810 fused_ordering(192) 00:16:28.810 fused_ordering(193) 00:16:28.810 fused_ordering(194) 00:16:28.810 fused_ordering(195) 00:16:28.810 fused_ordering(196) 00:16:28.810 fused_ordering(197) 00:16:28.810 fused_ordering(198) 00:16:28.810 fused_ordering(199) 00:16:28.810 fused_ordering(200) 00:16:28.810 fused_ordering(201) 00:16:28.810 fused_ordering(202) 00:16:28.810 fused_ordering(203) 00:16:28.810 fused_ordering(204) 00:16:28.810 fused_ordering(205) 00:16:29.377 
fused_ordering(206) 00:16:29.377 fused_ordering(207) 00:16:29.377 fused_ordering(208) 00:16:29.377 fused_ordering(209) 00:16:29.377 fused_ordering(210) 00:16:29.377 fused_ordering(211) 00:16:29.377 fused_ordering(212) 00:16:29.377 fused_ordering(213) 00:16:29.377 fused_ordering(214) 00:16:29.377 fused_ordering(215) 00:16:29.377 fused_ordering(216) 00:16:29.377 fused_ordering(217) 00:16:29.377 fused_ordering(218) 00:16:29.377 fused_ordering(219) 00:16:29.377 fused_ordering(220) 00:16:29.377 fused_ordering(221) 00:16:29.377 fused_ordering(222) 00:16:29.377 fused_ordering(223) 00:16:29.377 fused_ordering(224) 00:16:29.377 fused_ordering(225) 00:16:29.377 fused_ordering(226) 00:16:29.377 fused_ordering(227) 00:16:29.377 fused_ordering(228) 00:16:29.377 fused_ordering(229) 00:16:29.377 fused_ordering(230) 00:16:29.377 fused_ordering(231) 00:16:29.377 fused_ordering(232) 00:16:29.377 fused_ordering(233) 00:16:29.377 fused_ordering(234) 00:16:29.377 fused_ordering(235) 00:16:29.377 fused_ordering(236) 00:16:29.377 fused_ordering(237) 00:16:29.377 fused_ordering(238) 00:16:29.377 fused_ordering(239) 00:16:29.377 fused_ordering(240) 00:16:29.377 fused_ordering(241) 00:16:29.377 fused_ordering(242) 00:16:29.377 fused_ordering(243) 00:16:29.377 fused_ordering(244) 00:16:29.377 fused_ordering(245) 00:16:29.377 fused_ordering(246) 00:16:29.377 fused_ordering(247) 00:16:29.377 fused_ordering(248) 00:16:29.377 fused_ordering(249) 00:16:29.377 fused_ordering(250) 00:16:29.377 fused_ordering(251) 00:16:29.377 fused_ordering(252) 00:16:29.377 fused_ordering(253) 00:16:29.377 fused_ordering(254) 00:16:29.377 fused_ordering(255) 00:16:29.377 fused_ordering(256) 00:16:29.377 fused_ordering(257) 00:16:29.377 fused_ordering(258) 00:16:29.377 fused_ordering(259) 00:16:29.377 fused_ordering(260) 00:16:29.377 fused_ordering(261) 00:16:29.377 fused_ordering(262) 00:16:29.377 fused_ordering(263) 00:16:29.377 fused_ordering(264) 00:16:29.377 fused_ordering(265) 00:16:29.377 fused_ordering(266) 00:16:29.377 fused_ordering(267) 00:16:29.377 fused_ordering(268) 00:16:29.377 fused_ordering(269) 00:16:29.377 fused_ordering(270) 00:16:29.377 fused_ordering(271) 00:16:29.377 fused_ordering(272) 00:16:29.377 fused_ordering(273) 00:16:29.377 fused_ordering(274) 00:16:29.377 fused_ordering(275) 00:16:29.377 fused_ordering(276) 00:16:29.377 fused_ordering(277) 00:16:29.377 fused_ordering(278) 00:16:29.377 fused_ordering(279) 00:16:29.377 fused_ordering(280) 00:16:29.377 fused_ordering(281) 00:16:29.377 fused_ordering(282) 00:16:29.377 fused_ordering(283) 00:16:29.377 fused_ordering(284) 00:16:29.377 fused_ordering(285) 00:16:29.377 fused_ordering(286) 00:16:29.377 fused_ordering(287) 00:16:29.377 fused_ordering(288) 00:16:29.377 fused_ordering(289) 00:16:29.377 fused_ordering(290) 00:16:29.377 fused_ordering(291) 00:16:29.377 fused_ordering(292) 00:16:29.377 fused_ordering(293) 00:16:29.377 fused_ordering(294) 00:16:29.377 fused_ordering(295) 00:16:29.377 fused_ordering(296) 00:16:29.377 fused_ordering(297) 00:16:29.377 fused_ordering(298) 00:16:29.377 fused_ordering(299) 00:16:29.377 fused_ordering(300) 00:16:29.377 fused_ordering(301) 00:16:29.377 fused_ordering(302) 00:16:29.377 fused_ordering(303) 00:16:29.377 fused_ordering(304) 00:16:29.377 fused_ordering(305) 00:16:29.377 fused_ordering(306) 00:16:29.377 fused_ordering(307) 00:16:29.377 fused_ordering(308) 00:16:29.377 fused_ordering(309) 00:16:29.377 fused_ordering(310) 00:16:29.377 fused_ordering(311) 00:16:29.377 fused_ordering(312) 00:16:29.377 fused_ordering(313) 
00:16:29.377 fused_ordering(314) 00:16:29.377 fused_ordering(315) 00:16:29.377 fused_ordering(316) 00:16:29.377 fused_ordering(317) 00:16:29.377 fused_ordering(318) 00:16:29.377 fused_ordering(319) 00:16:29.377 fused_ordering(320) 00:16:29.377 fused_ordering(321) 00:16:29.377 fused_ordering(322) 00:16:29.377 fused_ordering(323) 00:16:29.377 fused_ordering(324) 00:16:29.377 fused_ordering(325) 00:16:29.377 fused_ordering(326) 00:16:29.377 fused_ordering(327) 00:16:29.377 fused_ordering(328) 00:16:29.377 fused_ordering(329) 00:16:29.377 fused_ordering(330) 00:16:29.377 fused_ordering(331) 00:16:29.377 fused_ordering(332) 00:16:29.377 fused_ordering(333) 00:16:29.377 fused_ordering(334) 00:16:29.377 fused_ordering(335) 00:16:29.377 fused_ordering(336) 00:16:29.377 fused_ordering(337) 00:16:29.377 fused_ordering(338) 00:16:29.377 fused_ordering(339) 00:16:29.377 fused_ordering(340) 00:16:29.377 fused_ordering(341) 00:16:29.377 fused_ordering(342) 00:16:29.377 fused_ordering(343) 00:16:29.377 fused_ordering(344) 00:16:29.377 fused_ordering(345) 00:16:29.377 fused_ordering(346) 00:16:29.377 fused_ordering(347) 00:16:29.377 fused_ordering(348) 00:16:29.377 fused_ordering(349) 00:16:29.377 fused_ordering(350) 00:16:29.377 fused_ordering(351) 00:16:29.377 fused_ordering(352) 00:16:29.377 fused_ordering(353) 00:16:29.377 fused_ordering(354) 00:16:29.377 fused_ordering(355) 00:16:29.377 fused_ordering(356) 00:16:29.377 fused_ordering(357) 00:16:29.377 fused_ordering(358) 00:16:29.377 fused_ordering(359) 00:16:29.377 fused_ordering(360) 00:16:29.377 fused_ordering(361) 00:16:29.377 fused_ordering(362) 00:16:29.377 fused_ordering(363) 00:16:29.377 fused_ordering(364) 00:16:29.377 fused_ordering(365) 00:16:29.377 fused_ordering(366) 00:16:29.377 fused_ordering(367) 00:16:29.377 fused_ordering(368) 00:16:29.377 fused_ordering(369) 00:16:29.377 fused_ordering(370) 00:16:29.377 fused_ordering(371) 00:16:29.377 fused_ordering(372) 00:16:29.377 fused_ordering(373) 00:16:29.377 fused_ordering(374) 00:16:29.377 fused_ordering(375) 00:16:29.377 fused_ordering(376) 00:16:29.377 fused_ordering(377) 00:16:29.377 fused_ordering(378) 00:16:29.377 fused_ordering(379) 00:16:29.377 fused_ordering(380) 00:16:29.377 fused_ordering(381) 00:16:29.377 fused_ordering(382) 00:16:29.377 fused_ordering(383) 00:16:29.377 fused_ordering(384) 00:16:29.377 fused_ordering(385) 00:16:29.377 fused_ordering(386) 00:16:29.377 fused_ordering(387) 00:16:29.377 fused_ordering(388) 00:16:29.377 fused_ordering(389) 00:16:29.377 fused_ordering(390) 00:16:29.377 fused_ordering(391) 00:16:29.377 fused_ordering(392) 00:16:29.377 fused_ordering(393) 00:16:29.377 fused_ordering(394) 00:16:29.377 fused_ordering(395) 00:16:29.377 fused_ordering(396) 00:16:29.377 fused_ordering(397) 00:16:29.377 fused_ordering(398) 00:16:29.377 fused_ordering(399) 00:16:29.377 fused_ordering(400) 00:16:29.377 fused_ordering(401) 00:16:29.377 fused_ordering(402) 00:16:29.377 fused_ordering(403) 00:16:29.377 fused_ordering(404) 00:16:29.377 fused_ordering(405) 00:16:29.377 fused_ordering(406) 00:16:29.377 fused_ordering(407) 00:16:29.377 fused_ordering(408) 00:16:29.377 fused_ordering(409) 00:16:29.377 fused_ordering(410) 00:16:29.636 fused_ordering(411) 00:16:29.636 fused_ordering(412) 00:16:29.636 fused_ordering(413) 00:16:29.636 fused_ordering(414) 00:16:29.636 fused_ordering(415) 00:16:29.636 fused_ordering(416) 00:16:29.636 fused_ordering(417) 00:16:29.636 fused_ordering(418) 00:16:29.636 fused_ordering(419) 00:16:29.636 fused_ordering(420) 00:16:29.636 
fused_ordering(421) 00:16:29.636 fused_ordering(422) 00:16:29.637 fused_ordering(423) 00:16:29.637 fused_ordering(424) 00:16:29.637 fused_ordering(425) 00:16:29.637 fused_ordering(426) 00:16:29.637 fused_ordering(427) 00:16:29.637 fused_ordering(428) 00:16:29.637 fused_ordering(429) 00:16:29.637 fused_ordering(430) 00:16:29.637 fused_ordering(431) 00:16:29.637 fused_ordering(432) 00:16:29.637 fused_ordering(433) 00:16:29.637 fused_ordering(434) 00:16:29.637 fused_ordering(435) 00:16:29.637 fused_ordering(436) 00:16:29.637 fused_ordering(437) 00:16:29.637 fused_ordering(438) 00:16:29.637 fused_ordering(439) 00:16:29.637 fused_ordering(440) 00:16:29.637 fused_ordering(441) 00:16:29.637 fused_ordering(442) 00:16:29.637 fused_ordering(443) 00:16:29.637 fused_ordering(444) 00:16:29.637 fused_ordering(445) 00:16:29.637 fused_ordering(446) 00:16:29.637 fused_ordering(447) 00:16:29.637 fused_ordering(448) 00:16:29.637 fused_ordering(449) 00:16:29.637 fused_ordering(450) 00:16:29.637 fused_ordering(451) 00:16:29.637 fused_ordering(452) 00:16:29.637 fused_ordering(453) 00:16:29.637 fused_ordering(454) 00:16:29.637 fused_ordering(455) 00:16:29.637 fused_ordering(456) 00:16:29.637 fused_ordering(457) 00:16:29.637 fused_ordering(458) 00:16:29.637 fused_ordering(459) 00:16:29.637 fused_ordering(460) 00:16:29.637 fused_ordering(461) 00:16:29.637 fused_ordering(462) 00:16:29.637 fused_ordering(463) 00:16:29.637 fused_ordering(464) 00:16:29.637 fused_ordering(465) 00:16:29.637 fused_ordering(466) 00:16:29.637 fused_ordering(467) 00:16:29.637 fused_ordering(468) 00:16:29.637 fused_ordering(469) 00:16:29.637 fused_ordering(470) 00:16:29.637 fused_ordering(471) 00:16:29.637 fused_ordering(472) 00:16:29.637 fused_ordering(473) 00:16:29.637 fused_ordering(474) 00:16:29.637 fused_ordering(475) 00:16:29.637 fused_ordering(476) 00:16:29.637 fused_ordering(477) 00:16:29.637 fused_ordering(478) 00:16:29.637 fused_ordering(479) 00:16:29.637 fused_ordering(480) 00:16:29.637 fused_ordering(481) 00:16:29.637 fused_ordering(482) 00:16:29.637 fused_ordering(483) 00:16:29.637 fused_ordering(484) 00:16:29.637 fused_ordering(485) 00:16:29.637 fused_ordering(486) 00:16:29.637 fused_ordering(487) 00:16:29.637 fused_ordering(488) 00:16:29.637 fused_ordering(489) 00:16:29.637 fused_ordering(490) 00:16:29.637 fused_ordering(491) 00:16:29.637 fused_ordering(492) 00:16:29.637 fused_ordering(493) 00:16:29.637 fused_ordering(494) 00:16:29.637 fused_ordering(495) 00:16:29.637 fused_ordering(496) 00:16:29.637 fused_ordering(497) 00:16:29.637 fused_ordering(498) 00:16:29.637 fused_ordering(499) 00:16:29.637 fused_ordering(500) 00:16:29.637 fused_ordering(501) 00:16:29.637 fused_ordering(502) 00:16:29.637 fused_ordering(503) 00:16:29.637 fused_ordering(504) 00:16:29.637 fused_ordering(505) 00:16:29.637 fused_ordering(506) 00:16:29.637 fused_ordering(507) 00:16:29.637 fused_ordering(508) 00:16:29.637 fused_ordering(509) 00:16:29.637 fused_ordering(510) 00:16:29.637 fused_ordering(511) 00:16:29.637 fused_ordering(512) 00:16:29.637 fused_ordering(513) 00:16:29.637 fused_ordering(514) 00:16:29.637 fused_ordering(515) 00:16:29.637 fused_ordering(516) 00:16:29.637 fused_ordering(517) 00:16:29.637 fused_ordering(518) 00:16:29.637 fused_ordering(519) 00:16:29.637 fused_ordering(520) 00:16:29.637 fused_ordering(521) 00:16:29.637 fused_ordering(522) 00:16:29.637 fused_ordering(523) 00:16:29.637 fused_ordering(524) 00:16:29.637 fused_ordering(525) 00:16:29.637 fused_ordering(526) 00:16:29.637 fused_ordering(527) 00:16:29.637 fused_ordering(528) 
00:16:29.637 fused_ordering(529) 00:16:29.637 fused_ordering(530) 00:16:29.637 fused_ordering(531) 00:16:29.637 fused_ordering(532) 00:16:29.637 fused_ordering(533) 00:16:29.637 fused_ordering(534) 00:16:29.637 fused_ordering(535) 00:16:29.637 fused_ordering(536) 00:16:29.637 fused_ordering(537) 00:16:29.637 fused_ordering(538) 00:16:29.637 fused_ordering(539) 00:16:29.637 fused_ordering(540) 00:16:29.637 fused_ordering(541) 00:16:29.637 fused_ordering(542) 00:16:29.637 fused_ordering(543) 00:16:29.637 fused_ordering(544) 00:16:29.637 fused_ordering(545) 00:16:29.637 fused_ordering(546) 00:16:29.637 fused_ordering(547) 00:16:29.637 fused_ordering(548) 00:16:29.637 fused_ordering(549) 00:16:29.637 fused_ordering(550) 00:16:29.637 fused_ordering(551) 00:16:29.637 fused_ordering(552) 00:16:29.637 fused_ordering(553) 00:16:29.637 fused_ordering(554) 00:16:29.637 fused_ordering(555) 00:16:29.637 fused_ordering(556) 00:16:29.637 fused_ordering(557) 00:16:29.637 fused_ordering(558) 00:16:29.637 fused_ordering(559) 00:16:29.637 fused_ordering(560) 00:16:29.637 fused_ordering(561) 00:16:29.637 fused_ordering(562) 00:16:29.637 fused_ordering(563) 00:16:29.637 fused_ordering(564) 00:16:29.637 fused_ordering(565) 00:16:29.637 fused_ordering(566) 00:16:29.637 fused_ordering(567) 00:16:29.637 fused_ordering(568) 00:16:29.637 fused_ordering(569) 00:16:29.637 fused_ordering(570) 00:16:29.637 fused_ordering(571) 00:16:29.637 fused_ordering(572) 00:16:29.637 fused_ordering(573) 00:16:29.637 fused_ordering(574) 00:16:29.637 fused_ordering(575) 00:16:29.637 fused_ordering(576) 00:16:29.637 fused_ordering(577) 00:16:29.637 fused_ordering(578) 00:16:29.637 fused_ordering(579) 00:16:29.637 fused_ordering(580) 00:16:29.637 fused_ordering(581) 00:16:29.637 fused_ordering(582) 00:16:29.637 fused_ordering(583) 00:16:29.637 fused_ordering(584) 00:16:29.637 fused_ordering(585) 00:16:29.637 fused_ordering(586) 00:16:29.637 fused_ordering(587) 00:16:29.637 fused_ordering(588) 00:16:29.637 fused_ordering(589) 00:16:29.637 fused_ordering(590) 00:16:29.637 fused_ordering(591) 00:16:29.637 fused_ordering(592) 00:16:29.637 fused_ordering(593) 00:16:29.637 fused_ordering(594) 00:16:29.637 fused_ordering(595) 00:16:29.637 fused_ordering(596) 00:16:29.637 fused_ordering(597) 00:16:29.637 fused_ordering(598) 00:16:29.637 fused_ordering(599) 00:16:29.637 fused_ordering(600) 00:16:29.637 fused_ordering(601) 00:16:29.637 fused_ordering(602) 00:16:29.637 fused_ordering(603) 00:16:29.637 fused_ordering(604) 00:16:29.637 fused_ordering(605) 00:16:29.637 fused_ordering(606) 00:16:29.637 fused_ordering(607) 00:16:29.637 fused_ordering(608) 00:16:29.637 fused_ordering(609) 00:16:29.637 fused_ordering(610) 00:16:29.637 fused_ordering(611) 00:16:29.637 fused_ordering(612) 00:16:29.637 fused_ordering(613) 00:16:29.637 fused_ordering(614) 00:16:29.637 fused_ordering(615) 00:16:30.206 fused_ordering(616) 00:16:30.206 fused_ordering(617) 00:16:30.206 fused_ordering(618) 00:16:30.206 fused_ordering(619) 00:16:30.206 fused_ordering(620) 00:16:30.206 fused_ordering(621) 00:16:30.206 fused_ordering(622) 00:16:30.206 fused_ordering(623) 00:16:30.206 fused_ordering(624) 00:16:30.206 fused_ordering(625) 00:16:30.206 fused_ordering(626) 00:16:30.206 fused_ordering(627) 00:16:30.206 fused_ordering(628) 00:16:30.206 fused_ordering(629) 00:16:30.206 fused_ordering(630) 00:16:30.206 fused_ordering(631) 00:16:30.206 fused_ordering(632) 00:16:30.206 fused_ordering(633) 00:16:30.206 fused_ordering(634) 00:16:30.206 fused_ordering(635) 00:16:30.206 
fused_ordering(636) 00:16:30.206 fused_ordering(637) 00:16:30.206 fused_ordering(638) 00:16:30.206 fused_ordering(639) 00:16:30.206 fused_ordering(640) 00:16:30.206 fused_ordering(641) 00:16:30.206 fused_ordering(642) 00:16:30.206 fused_ordering(643) 00:16:30.206 fused_ordering(644) 00:16:30.206 fused_ordering(645) 00:16:30.206 fused_ordering(646) 00:16:30.206 fused_ordering(647) 00:16:30.206 fused_ordering(648) 00:16:30.206 fused_ordering(649) 00:16:30.206 fused_ordering(650) 00:16:30.206 fused_ordering(651) 00:16:30.206 fused_ordering(652) 00:16:30.206 fused_ordering(653) 00:16:30.206 fused_ordering(654) 00:16:30.206 fused_ordering(655) 00:16:30.206 fused_ordering(656) 00:16:30.206 fused_ordering(657) 00:16:30.206 fused_ordering(658) 00:16:30.206 fused_ordering(659) 00:16:30.206 fused_ordering(660) 00:16:30.206 fused_ordering(661) 00:16:30.206 fused_ordering(662) 00:16:30.206 fused_ordering(663) 00:16:30.206 fused_ordering(664) 00:16:30.206 fused_ordering(665) 00:16:30.206 fused_ordering(666) 00:16:30.206 fused_ordering(667) 00:16:30.206 fused_ordering(668) 00:16:30.206 fused_ordering(669) 00:16:30.206 fused_ordering(670) 00:16:30.206 fused_ordering(671) 00:16:30.206 fused_ordering(672) 00:16:30.206 fused_ordering(673) 00:16:30.206 fused_ordering(674) 00:16:30.206 fused_ordering(675) 00:16:30.206 fused_ordering(676) 00:16:30.206 fused_ordering(677) 00:16:30.206 fused_ordering(678) 00:16:30.206 fused_ordering(679) 00:16:30.206 fused_ordering(680) 00:16:30.206 fused_ordering(681) 00:16:30.206 fused_ordering(682) 00:16:30.206 fused_ordering(683) 00:16:30.206 fused_ordering(684) 00:16:30.206 fused_ordering(685) 00:16:30.206 fused_ordering(686) 00:16:30.206 fused_ordering(687) 00:16:30.206 fused_ordering(688) 00:16:30.206 fused_ordering(689) 00:16:30.206 fused_ordering(690) 00:16:30.206 fused_ordering(691) 00:16:30.206 fused_ordering(692) 00:16:30.206 fused_ordering(693) 00:16:30.206 fused_ordering(694) 00:16:30.206 fused_ordering(695) 00:16:30.206 fused_ordering(696) 00:16:30.206 fused_ordering(697) 00:16:30.206 fused_ordering(698) 00:16:30.206 fused_ordering(699) 00:16:30.206 fused_ordering(700) 00:16:30.206 fused_ordering(701) 00:16:30.206 fused_ordering(702) 00:16:30.206 fused_ordering(703) 00:16:30.206 fused_ordering(704) 00:16:30.206 fused_ordering(705) 00:16:30.206 fused_ordering(706) 00:16:30.206 fused_ordering(707) 00:16:30.206 fused_ordering(708) 00:16:30.206 fused_ordering(709) 00:16:30.206 fused_ordering(710) 00:16:30.206 fused_ordering(711) 00:16:30.206 fused_ordering(712) 00:16:30.206 fused_ordering(713) 00:16:30.206 fused_ordering(714) 00:16:30.206 fused_ordering(715) 00:16:30.206 fused_ordering(716) 00:16:30.206 fused_ordering(717) 00:16:30.206 fused_ordering(718) 00:16:30.206 fused_ordering(719) 00:16:30.206 fused_ordering(720) 00:16:30.206 fused_ordering(721) 00:16:30.206 fused_ordering(722) 00:16:30.206 fused_ordering(723) 00:16:30.206 fused_ordering(724) 00:16:30.206 fused_ordering(725) 00:16:30.206 fused_ordering(726) 00:16:30.206 fused_ordering(727) 00:16:30.206 fused_ordering(728) 00:16:30.206 fused_ordering(729) 00:16:30.206 fused_ordering(730) 00:16:30.206 fused_ordering(731) 00:16:30.206 fused_ordering(732) 00:16:30.206 fused_ordering(733) 00:16:30.206 fused_ordering(734) 00:16:30.206 fused_ordering(735) 00:16:30.206 fused_ordering(736) 00:16:30.206 fused_ordering(737) 00:16:30.206 fused_ordering(738) 00:16:30.206 fused_ordering(739) 00:16:30.206 fused_ordering(740) 00:16:30.206 fused_ordering(741) 00:16:30.206 fused_ordering(742) 00:16:30.206 fused_ordering(743) 
00:16:30.206 fused_ordering(744) 00:16:30.206 fused_ordering(745) 00:16:30.206 fused_ordering(746) 00:16:30.206 fused_ordering(747) 00:16:30.206 fused_ordering(748) 00:16:30.206 fused_ordering(749) 00:16:30.206 fused_ordering(750) 00:16:30.206 fused_ordering(751) 00:16:30.206 fused_ordering(752) 00:16:30.206 fused_ordering(753) 00:16:30.206 fused_ordering(754) 00:16:30.206 fused_ordering(755) 00:16:30.207 fused_ordering(756) 00:16:30.207 fused_ordering(757) 00:16:30.207 fused_ordering(758) 00:16:30.207 fused_ordering(759) 00:16:30.207 fused_ordering(760) 00:16:30.207 fused_ordering(761) 00:16:30.207 fused_ordering(762) 00:16:30.207 fused_ordering(763) 00:16:30.207 fused_ordering(764) 00:16:30.207 fused_ordering(765) 00:16:30.207 fused_ordering(766) 00:16:30.207 fused_ordering(767) 00:16:30.207 fused_ordering(768) 00:16:30.207 fused_ordering(769) 00:16:30.207 fused_ordering(770) 00:16:30.207 fused_ordering(771) 00:16:30.207 fused_ordering(772) 00:16:30.207 fused_ordering(773) 00:16:30.207 fused_ordering(774) 00:16:30.207 fused_ordering(775) 00:16:30.207 fused_ordering(776) 00:16:30.207 fused_ordering(777) 00:16:30.207 fused_ordering(778) 00:16:30.207 fused_ordering(779) 00:16:30.207 fused_ordering(780) 00:16:30.207 fused_ordering(781) 00:16:30.207 fused_ordering(782) 00:16:30.207 fused_ordering(783) 00:16:30.207 fused_ordering(784) 00:16:30.207 fused_ordering(785) 00:16:30.207 fused_ordering(786) 00:16:30.207 fused_ordering(787) 00:16:30.207 fused_ordering(788) 00:16:30.207 fused_ordering(789) 00:16:30.207 fused_ordering(790) 00:16:30.207 fused_ordering(791) 00:16:30.207 fused_ordering(792) 00:16:30.207 fused_ordering(793) 00:16:30.207 fused_ordering(794) 00:16:30.207 fused_ordering(795) 00:16:30.207 fused_ordering(796) 00:16:30.207 fused_ordering(797) 00:16:30.207 fused_ordering(798) 00:16:30.207 fused_ordering(799) 00:16:30.207 fused_ordering(800) 00:16:30.207 fused_ordering(801) 00:16:30.207 fused_ordering(802) 00:16:30.207 fused_ordering(803) 00:16:30.207 fused_ordering(804) 00:16:30.207 fused_ordering(805) 00:16:30.207 fused_ordering(806) 00:16:30.207 fused_ordering(807) 00:16:30.207 fused_ordering(808) 00:16:30.207 fused_ordering(809) 00:16:30.207 fused_ordering(810) 00:16:30.207 fused_ordering(811) 00:16:30.207 fused_ordering(812) 00:16:30.207 fused_ordering(813) 00:16:30.207 fused_ordering(814) 00:16:30.207 fused_ordering(815) 00:16:30.207 fused_ordering(816) 00:16:30.207 fused_ordering(817) 00:16:30.207 fused_ordering(818) 00:16:30.207 fused_ordering(819) 00:16:30.207 fused_ordering(820) 00:16:30.775 fused_ordering(821) 00:16:30.775 fused_ordering(822) 00:16:30.775 fused_ordering(823) 00:16:30.775 fused_ordering(824) 00:16:30.775 fused_ordering(825) 00:16:30.775 fused_ordering(826) 00:16:30.775 fused_ordering(827) 00:16:30.775 fused_ordering(828) 00:16:30.775 fused_ordering(829) 00:16:30.775 fused_ordering(830) 00:16:30.775 fused_ordering(831) 00:16:30.775 fused_ordering(832) 00:16:30.775 fused_ordering(833) 00:16:30.776 fused_ordering(834) 00:16:30.776 fused_ordering(835) 00:16:30.776 fused_ordering(836) 00:16:30.776 fused_ordering(837) 00:16:30.776 fused_ordering(838) 00:16:30.776 fused_ordering(839) 00:16:30.776 fused_ordering(840) 00:16:30.776 fused_ordering(841) 00:16:30.776 fused_ordering(842) 00:16:30.776 fused_ordering(843) 00:16:30.776 fused_ordering(844) 00:16:30.776 fused_ordering(845) 00:16:30.776 fused_ordering(846) 00:16:30.776 fused_ordering(847) 00:16:30.776 fused_ordering(848) 00:16:30.776 fused_ordering(849) 00:16:30.776 fused_ordering(850) 00:16:30.776 
fused_ordering(851) 00:16:30.776 fused_ordering(852) 00:16:30.776 fused_ordering(853) 00:16:30.776 fused_ordering(854) 00:16:30.776 fused_ordering(855) 00:16:30.776 fused_ordering(856) 00:16:30.776 fused_ordering(857) 00:16:30.776 fused_ordering(858) 00:16:30.776 fused_ordering(859) 00:16:30.776 fused_ordering(860) 00:16:30.776 fused_ordering(861) 00:16:30.776 fused_ordering(862) 00:16:30.776 fused_ordering(863) 00:16:30.776 fused_ordering(864) 00:16:30.776 fused_ordering(865) 00:16:30.776 fused_ordering(866) 00:16:30.776 fused_ordering(867) 00:16:30.776 fused_ordering(868) 00:16:30.776 fused_ordering(869) 00:16:30.776 fused_ordering(870) 00:16:30.776 fused_ordering(871) 00:16:30.776 fused_ordering(872) 00:16:30.776 fused_ordering(873) 00:16:30.776 fused_ordering(874) 00:16:30.776 fused_ordering(875) 00:16:30.776 fused_ordering(876) 00:16:30.776 fused_ordering(877) 00:16:30.776 fused_ordering(878) 00:16:30.776 fused_ordering(879) 00:16:30.776 fused_ordering(880) 00:16:30.776 fused_ordering(881) 00:16:30.776 fused_ordering(882) 00:16:30.776 fused_ordering(883) 00:16:30.776 fused_ordering(884) 00:16:30.776 fused_ordering(885) 00:16:30.776 fused_ordering(886) 00:16:30.776 fused_ordering(887) 00:16:30.776 fused_ordering(888) 00:16:30.776 fused_ordering(889) 00:16:30.776 fused_ordering(890) 00:16:30.776 fused_ordering(891) 00:16:30.776 fused_ordering(892) 00:16:30.776 fused_ordering(893) 00:16:30.776 fused_ordering(894) 00:16:30.776 fused_ordering(895) 00:16:30.776 fused_ordering(896) 00:16:30.776 fused_ordering(897) 00:16:30.776 fused_ordering(898) 00:16:30.776 fused_ordering(899) 00:16:30.776 fused_ordering(900) 00:16:30.776 fused_ordering(901) 00:16:30.776 fused_ordering(902) 00:16:30.776 fused_ordering(903) 00:16:30.776 fused_ordering(904) 00:16:30.776 fused_ordering(905) 00:16:30.776 fused_ordering(906) 00:16:30.776 fused_ordering(907) 00:16:30.776 fused_ordering(908) 00:16:30.776 fused_ordering(909) 00:16:30.776 fused_ordering(910) 00:16:30.776 fused_ordering(911) 00:16:30.776 fused_ordering(912) 00:16:30.776 fused_ordering(913) 00:16:30.776 fused_ordering(914) 00:16:30.776 fused_ordering(915) 00:16:30.776 fused_ordering(916) 00:16:30.776 fused_ordering(917) 00:16:30.776 fused_ordering(918) 00:16:30.776 fused_ordering(919) 00:16:30.776 fused_ordering(920) 00:16:30.776 fused_ordering(921) 00:16:30.776 fused_ordering(922) 00:16:30.776 fused_ordering(923) 00:16:30.776 fused_ordering(924) 00:16:30.776 fused_ordering(925) 00:16:30.776 fused_ordering(926) 00:16:30.776 fused_ordering(927) 00:16:30.776 fused_ordering(928) 00:16:30.776 fused_ordering(929) 00:16:30.776 fused_ordering(930) 00:16:30.776 fused_ordering(931) 00:16:30.776 fused_ordering(932) 00:16:30.776 fused_ordering(933) 00:16:30.776 fused_ordering(934) 00:16:30.776 fused_ordering(935) 00:16:30.776 fused_ordering(936) 00:16:30.776 fused_ordering(937) 00:16:30.776 fused_ordering(938) 00:16:30.776 fused_ordering(939) 00:16:30.776 fused_ordering(940) 00:16:30.776 fused_ordering(941) 00:16:30.776 fused_ordering(942) 00:16:30.776 fused_ordering(943) 00:16:30.776 fused_ordering(944) 00:16:30.776 fused_ordering(945) 00:16:30.776 fused_ordering(946) 00:16:30.776 fused_ordering(947) 00:16:30.776 fused_ordering(948) 00:16:30.776 fused_ordering(949) 00:16:30.776 fused_ordering(950) 00:16:30.776 fused_ordering(951) 00:16:30.776 fused_ordering(952) 00:16:30.776 fused_ordering(953) 00:16:30.776 fused_ordering(954) 00:16:30.776 fused_ordering(955) 00:16:30.776 fused_ordering(956) 00:16:30.776 fused_ordering(957) 00:16:30.776 fused_ordering(958) 
00:16:30.776 fused_ordering(959) 00:16:30.776 fused_ordering(960) 00:16:30.776 fused_ordering(961) 00:16:30.776 fused_ordering(962) 00:16:30.776 fused_ordering(963) 00:16:30.776 fused_ordering(964) 00:16:30.776 fused_ordering(965) 00:16:30.776 fused_ordering(966) 00:16:30.776 fused_ordering(967) 00:16:30.776 fused_ordering(968) 00:16:30.776 fused_ordering(969) 00:16:30.776 fused_ordering(970) 00:16:30.776 fused_ordering(971) 00:16:30.776 fused_ordering(972) 00:16:30.776 fused_ordering(973) 00:16:30.776 fused_ordering(974) 00:16:30.776 fused_ordering(975) 00:16:30.776 fused_ordering(976) 00:16:30.776 fused_ordering(977) 00:16:30.776 fused_ordering(978) 00:16:30.776 fused_ordering(979) 00:16:30.776 fused_ordering(980) 00:16:30.776 fused_ordering(981) 00:16:30.776 fused_ordering(982) 00:16:30.776 fused_ordering(983) 00:16:30.776 fused_ordering(984) 00:16:30.776 fused_ordering(985) 00:16:30.776 fused_ordering(986) 00:16:30.776 fused_ordering(987) 00:16:30.776 fused_ordering(988) 00:16:30.776 fused_ordering(989) 00:16:30.776 fused_ordering(990) 00:16:30.776 fused_ordering(991) 00:16:30.776 fused_ordering(992) 00:16:30.776 fused_ordering(993) 00:16:30.776 fused_ordering(994) 00:16:30.776 fused_ordering(995) 00:16:30.776 fused_ordering(996) 00:16:30.776 fused_ordering(997) 00:16:30.776 fused_ordering(998) 00:16:30.776 fused_ordering(999) 00:16:30.776 fused_ordering(1000) 00:16:30.776 fused_ordering(1001) 00:16:30.776 fused_ordering(1002) 00:16:30.776 fused_ordering(1003) 00:16:30.776 fused_ordering(1004) 00:16:30.776 fused_ordering(1005) 00:16:30.776 fused_ordering(1006) 00:16:30.776 fused_ordering(1007) 00:16:30.776 fused_ordering(1008) 00:16:30.776 fused_ordering(1009) 00:16:30.776 fused_ordering(1010) 00:16:30.776 fused_ordering(1011) 00:16:30.776 fused_ordering(1012) 00:16:30.776 fused_ordering(1013) 00:16:30.776 fused_ordering(1014) 00:16:30.776 fused_ordering(1015) 00:16:30.776 fused_ordering(1016) 00:16:30.776 fused_ordering(1017) 00:16:30.776 fused_ordering(1018) 00:16:30.776 fused_ordering(1019) 00:16:30.776 fused_ordering(1020) 00:16:30.776 fused_ordering(1021) 00:16:30.776 fused_ordering(1022) 00:16:30.776 fused_ordering(1023) 00:16:30.776 14:30:37 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:30.776 14:30:37 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:30.776 14:30:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:30.776 14:30:37 -- nvmf/common.sh@116 -- # sync 00:16:30.776 14:30:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:30.776 14:30:37 -- nvmf/common.sh@119 -- # set +e 00:16:30.776 14:30:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:30.776 14:30:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:30.776 rmmod nvme_tcp 00:16:30.776 rmmod nvme_fabrics 00:16:30.776 rmmod nvme_keyring 00:16:30.776 14:30:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:30.776 14:30:37 -- nvmf/common.sh@123 -- # set -e 00:16:30.776 14:30:37 -- nvmf/common.sh@124 -- # return 0 00:16:30.776 14:30:37 -- nvmf/common.sh@477 -- # '[' -n 70663 ']' 00:16:30.776 14:30:37 -- nvmf/common.sh@478 -- # killprocess 70663 00:16:30.776 14:30:37 -- common/autotest_common.sh@936 -- # '[' -z 70663 ']' 00:16:30.776 14:30:37 -- common/autotest_common.sh@940 -- # kill -0 70663 00:16:30.776 14:30:37 -- common/autotest_common.sh@941 -- # uname 00:16:30.776 14:30:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.776 14:30:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70663 00:16:30.776 14:30:37 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:30.776 killing process with pid 70663 00:16:30.776 14:30:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:30.776 14:30:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70663' 00:16:30.776 14:30:37 -- common/autotest_common.sh@955 -- # kill 70663 00:16:30.776 14:30:37 -- common/autotest_common.sh@960 -- # wait 70663 00:16:31.035 14:30:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:31.035 14:30:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:31.035 14:30:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:31.035 14:30:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.035 14:30:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:31.035 14:30:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.035 14:30:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.035 14:30:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.035 14:30:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:31.035 ************************************ 00:16:31.035 END TEST nvmf_fused_ordering 00:16:31.035 ************************************ 00:16:31.035 00:16:31.035 real 0m4.393s 00:16:31.035 user 0m5.212s 00:16:31.035 sys 0m1.432s 00:16:31.035 14:30:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:31.035 14:30:37 -- common/autotest_common.sh@10 -- # set +x 00:16:31.035 14:30:37 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:31.035 14:30:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:31.035 14:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.035 14:30:37 -- common/autotest_common.sh@10 -- # set +x 00:16:31.035 ************************************ 00:16:31.035 START TEST nvmf_delete_subsystem 00:16:31.035 ************************************ 00:16:31.035 14:30:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:31.295 * Looking for test storage... 
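For orientation, the nvmf_delete_subsystem run traced below reduces to a short RPC sequence: bring up a TCP transport, create a subsystem with a listener, back it with a null bdev wrapped in a delay bdev, drive I/O at it with spdk_nvme_perf, then delete the subsystem while that I/O is still queued. A condensed sketch, copied from the rpc_cmd calls that appear further down in this log (rpc_cmd is assumed here to be the autotest wrapper that forwards to scripts/rpc.py on the default /var/tmp/spdk.sock socket):

  # sketch only - condensed from the traced commands below, not a replacement for delete_subsystem.sh
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                          # injected latency keeps requests queued
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # spdk_nvme_perf -q 128 -w randrw runs against 10.0.0.2:4420 in the background, then:
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # issued while perf I/O is in flight;
                                                             # the aborted completions appear below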
00:16:31.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:31.295 14:30:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:31.295 14:30:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:31.295 14:30:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:31.295 14:30:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:31.295 14:30:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:31.295 14:30:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:31.295 14:30:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:31.295 14:30:38 -- scripts/common.sh@335 -- # IFS=.-: 00:16:31.295 14:30:38 -- scripts/common.sh@335 -- # read -ra ver1 00:16:31.295 14:30:38 -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.295 14:30:38 -- scripts/common.sh@336 -- # read -ra ver2 00:16:31.295 14:30:38 -- scripts/common.sh@337 -- # local 'op=<' 00:16:31.295 14:30:38 -- scripts/common.sh@339 -- # ver1_l=2 00:16:31.295 14:30:38 -- scripts/common.sh@340 -- # ver2_l=1 00:16:31.295 14:30:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:31.295 14:30:38 -- scripts/common.sh@343 -- # case "$op" in 00:16:31.295 14:30:38 -- scripts/common.sh@344 -- # : 1 00:16:31.295 14:30:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:31.295 14:30:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:31.295 14:30:38 -- scripts/common.sh@364 -- # decimal 1 00:16:31.295 14:30:38 -- scripts/common.sh@352 -- # local d=1 00:16:31.295 14:30:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.295 14:30:38 -- scripts/common.sh@354 -- # echo 1 00:16:31.295 14:30:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:31.295 14:30:38 -- scripts/common.sh@365 -- # decimal 2 00:16:31.295 14:30:38 -- scripts/common.sh@352 -- # local d=2 00:16:31.295 14:30:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.295 14:30:38 -- scripts/common.sh@354 -- # echo 2 00:16:31.295 14:30:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:31.295 14:30:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:31.295 14:30:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:31.295 14:30:38 -- scripts/common.sh@367 -- # return 0 00:16:31.295 14:30:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.295 14:30:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.295 --rc genhtml_branch_coverage=1 00:16:31.295 --rc genhtml_function_coverage=1 00:16:31.295 --rc genhtml_legend=1 00:16:31.295 --rc geninfo_all_blocks=1 00:16:31.295 --rc geninfo_unexecuted_blocks=1 00:16:31.295 00:16:31.295 ' 00:16:31.295 14:30:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.295 --rc genhtml_branch_coverage=1 00:16:31.295 --rc genhtml_function_coverage=1 00:16:31.295 --rc genhtml_legend=1 00:16:31.295 --rc geninfo_all_blocks=1 00:16:31.295 --rc geninfo_unexecuted_blocks=1 00:16:31.295 00:16:31.295 ' 00:16:31.295 14:30:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.295 --rc genhtml_branch_coverage=1 00:16:31.295 --rc genhtml_function_coverage=1 00:16:31.295 --rc genhtml_legend=1 00:16:31.295 --rc geninfo_all_blocks=1 00:16:31.295 --rc geninfo_unexecuted_blocks=1 00:16:31.295 00:16:31.295 ' 00:16:31.295 
14:30:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.295 --rc genhtml_branch_coverage=1 00:16:31.295 --rc genhtml_function_coverage=1 00:16:31.295 --rc genhtml_legend=1 00:16:31.295 --rc geninfo_all_blocks=1 00:16:31.295 --rc geninfo_unexecuted_blocks=1 00:16:31.295 00:16:31.295 ' 00:16:31.295 14:30:38 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:31.295 14:30:38 -- nvmf/common.sh@7 -- # uname -s 00:16:31.295 14:30:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.295 14:30:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.295 14:30:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.295 14:30:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.295 14:30:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.295 14:30:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.295 14:30:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.295 14:30:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.295 14:30:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.295 14:30:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.295 14:30:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:16:31.295 14:30:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:16:31.295 14:30:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.295 14:30:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.295 14:30:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:31.295 14:30:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.295 14:30:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.295 14:30:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.295 14:30:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.295 14:30:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.295 14:30:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.295 14:30:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.295 14:30:38 -- paths/export.sh@5 -- # export PATH 00:16:31.295 14:30:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.295 14:30:38 -- nvmf/common.sh@46 -- # : 0 00:16:31.295 14:30:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:31.295 14:30:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:31.295 14:30:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:31.295 14:30:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.295 14:30:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.295 14:30:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:31.295 14:30:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:31.295 14:30:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:31.295 14:30:38 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:31.295 14:30:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:31.295 14:30:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.295 14:30:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:31.295 14:30:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:31.295 14:30:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:31.295 14:30:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.295 14:30:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.295 14:30:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.295 14:30:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:31.295 14:30:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:31.295 14:30:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:31.295 14:30:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:31.295 14:30:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:31.295 14:30:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:31.295 14:30:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.295 14:30:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.295 14:30:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:31.295 14:30:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:31.295 14:30:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:31.295 14:30:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:31.295 14:30:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:31.295 14:30:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:31.295 14:30:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:31.295 14:30:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:31.295 14:30:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:31.295 14:30:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:31.295 14:30:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:31.295 14:30:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:31.295 Cannot find device "nvmf_tgt_br" 00:16:31.295 14:30:38 -- nvmf/common.sh@154 -- # true 00:16:31.295 14:30:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:31.295 Cannot find device "nvmf_tgt_br2" 00:16:31.295 14:30:38 -- nvmf/common.sh@155 -- # true 00:16:31.295 14:30:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:31.295 14:30:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:31.295 Cannot find device "nvmf_tgt_br" 00:16:31.295 14:30:38 -- nvmf/common.sh@157 -- # true 00:16:31.295 14:30:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:31.295 Cannot find device "nvmf_tgt_br2" 00:16:31.295 14:30:38 -- nvmf/common.sh@158 -- # true 00:16:31.295 14:30:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:31.554 14:30:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:31.554 14:30:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.554 14:30:38 -- nvmf/common.sh@161 -- # true 00:16:31.554 14:30:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.554 14:30:38 -- nvmf/common.sh@162 -- # true 00:16:31.554 14:30:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:31.554 14:30:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:31.554 14:30:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:31.554 14:30:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:31.554 14:30:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:31.554 14:30:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:31.554 14:30:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:31.554 14:30:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:31.554 14:30:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:31.554 14:30:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:31.554 14:30:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:31.554 14:30:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:31.554 14:30:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:31.554 14:30:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:31.554 14:30:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:31.554 14:30:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:31.554 14:30:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:31.554 14:30:38 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:31.554 14:30:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.554 14:30:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.554 14:30:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.554 14:30:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.554 14:30:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.554 14:30:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:31.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:31.554 00:16:31.554 --- 10.0.0.2 ping statistics --- 00:16:31.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.554 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:31.554 14:30:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:31.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:31.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:16:31.813 00:16:31.813 --- 10.0.0.3 ping statistics --- 00:16:31.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.813 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:31.813 14:30:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:31.813 00:16:31.813 --- 10.0.0.1 ping statistics --- 00:16:31.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.813 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:31.813 14:30:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.813 14:30:38 -- nvmf/common.sh@421 -- # return 0 00:16:31.813 14:30:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.813 14:30:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.813 14:30:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.813 14:30:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.813 14:30:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.813 14:30:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.813 14:30:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.813 14:30:38 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:31.813 14:30:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.813 14:30:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:31.813 14:30:38 -- common/autotest_common.sh@10 -- # set +x 00:16:31.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.813 14:30:38 -- nvmf/common.sh@469 -- # nvmfpid=70931 00:16:31.813 14:30:38 -- nvmf/common.sh@470 -- # waitforlisten 70931 00:16:31.813 14:30:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:31.813 14:30:38 -- common/autotest_common.sh@829 -- # '[' -z 70931 ']' 00:16:31.813 14:30:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.813 14:30:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.813 14:30:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
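The veth/namespace plumbing traced just above gives the target its own network stack while the initiator stays in the root namespace. Condensed, and using the interface names from this run (link-up steps and the second target interface, nvmf_tgt_if2 at 10.0.0.3, set up the same way, are omitted for brevity), the topology is roughly:

  # sketch of the topology built by nvmf_veth_init in this run
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # bridge ties the *_br peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # the target then runs inside the namespace:
  #   ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3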
00:16:31.813 14:30:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.813 14:30:38 -- common/autotest_common.sh@10 -- # set +x 00:16:31.813 [2024-12-06 14:30:38.637700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:31.813 [2024-12-06 14:30:38.638186] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.813 [2024-12-06 14:30:38.779884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.071 [2024-12-06 14:30:38.910576] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:32.071 [2024-12-06 14:30:38.910750] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.071 [2024-12-06 14:30:38.910767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.071 [2024-12-06 14:30:38.910778] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.071 [2024-12-06 14:30:38.910945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.071 [2024-12-06 14:30:38.911057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.005 14:30:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.005 14:30:39 -- common/autotest_common.sh@862 -- # return 0 00:16:33.005 14:30:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:33.005 14:30:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.005 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.005 14:30:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.005 14:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.005 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.005 [2024-12-06 14:30:39.737458] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.005 14:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:33.005 14:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.005 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.005 14:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.005 14:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.005 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.005 [2024-12-06 14:30:39.757608] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.005 14:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:33.005 14:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.005 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.005 NULL1 00:16:33.005 14:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.005 14:30:39 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:33.005 14:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.005 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.005 Delay0 00:16:33.005 14:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.005 14:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.005 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.005 14:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@28 -- # perf_pid=70988 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:33.005 14:30:39 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:33.005 [2024-12-06 14:30:39.959575] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:34.914 14:30:41 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.914 14:30:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.914 14:30:41 -- common/autotest_common.sh@10 -- # set +x 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 starting I/O failed: -6 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 starting I/O failed: -6 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 starting I/O failed: -6 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 starting I/O failed: -6 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 starting I/O failed: -6 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 starting I/O failed: -6 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 starting I/O failed: -6 00:16:35.189 Write completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.189 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 
starting I/O failed: -6 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 [2024-12-06 14:30:41.993657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8d30 is same with the state(5) to be set 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 [2024-12-06 14:30:41.995271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc87d0 is same with the state(5) to be set 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 
00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 [2024-12-06 14:30:41.995588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc9950 is same with the state(5) to be set 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error 
(sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Write completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 Read completed with error (sct=0, sc=8) 00:16:35.190 starting I/O failed: -6 00:16:35.190 [2024-12-06 14:30:41.996925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4d5800c350 is same with the state(5) to be set 00:16:36.123 [2024-12-06 14:30:42.974300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bca5a0 is same with the state(5) to be set 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 [2024-12-06 14:30:42.994784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8a80 is same with the state(5) to be set 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Write completed with error (sct=0, sc=8) 00:16:36.123 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, 
sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 [2024-12-06 14:30:42.997261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4d5800bf20 is same with the state(5) to be set 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 [2024-12-06 14:30:42.997498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4d58000c00 is same with the state(5) to be set 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 
00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Write completed with error (sct=0, sc=8) 00:16:36.124 Read completed with error (sct=0, sc=8) 00:16:36.124 [2024-12-06 14:30:42.997701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4d5800c600 is same with the state(5) to be set 00:16:36.124 [2024-12-06 14:30:42.998833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bca5a0 (9): Bad file descriptor 00:16:36.124 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:36.124 14:30:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.124 14:30:43 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:36.124 14:30:43 -- target/delete_subsystem.sh@35 -- # kill -0 70988 00:16:36.124 14:30:43 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:36.124 Initializing NVMe Controllers 00:16:36.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:36.124 Controller IO queue size 128, less than required. 00:16:36.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:36.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:36.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:36.124 Initialization complete. Launching workers. 00:16:36.124 ======================================================== 00:16:36.124 Latency(us) 00:16:36.124 Device Information : IOPS MiB/s Average min max 00:16:36.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.06 0.08 883039.27 1627.57 1011439.51 00:16:36.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.00 0.08 1014912.33 496.53 2001801.11 00:16:36.124 ======================================================== 00:16:36.124 Total : 319.06 0.16 951235.31 496.53 2001801.11 00:16:36.124 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@35 -- # kill -0 70988 00:16:36.690 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70988) - No such process 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@45 -- # NOT wait 70988 00:16:36.690 14:30:43 -- common/autotest_common.sh@650 -- # local es=0 00:16:36.690 14:30:43 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 70988 00:16:36.690 14:30:43 -- common/autotest_common.sh@638 -- # local arg=wait 00:16:36.690 14:30:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:36.690 14:30:43 -- common/autotest_common.sh@642 -- # type -t wait 00:16:36.690 14:30:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:36.690 14:30:43 -- common/autotest_common.sh@653 -- # wait 70988 00:16:36.690 14:30:43 -- common/autotest_common.sh@653 -- # es=1 00:16:36.690 14:30:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:36.690 14:30:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:36.690 14:30:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:36.690 14:30:43 -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:36.690 14:30:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.690 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:36.690 14:30:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.690 14:30:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.690 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:36.690 [2024-12-06 14:30:43.524576] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.690 14:30:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.690 14:30:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.690 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:36.690 14:30:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@54 -- # perf_pid=71028 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:36.690 14:30:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:36.948 [2024-12-06 14:30:43.693981] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:37.207 14:30:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:37.207 14:30:44 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:37.207 14:30:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:37.775 14:30:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:37.775 14:30:44 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:37.775 14:30:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:38.342 14:30:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:38.342 14:30:45 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:38.342 14:30:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:38.600 14:30:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:38.600 14:30:45 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:38.600 14:30:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:39.166 14:30:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:39.166 14:30:46 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:39.166 14:30:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:39.733 14:30:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:39.733 14:30:46 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:39.733 14:30:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:39.991 Initializing NVMe Controllers 00:16:39.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:39.991 Controller IO queue size 128, less than required. 
00:16:39.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:39.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:39.991 Initialization complete. Launching workers. 00:16:39.991 ======================================================== 00:16:39.991 Latency(us) 00:16:39.991 Device Information : IOPS MiB/s Average min max 00:16:39.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004679.78 1000191.67 1041036.99 00:16:39.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005624.54 1000162.49 1016456.73 00:16:39.991 ======================================================== 00:16:39.991 Total : 256.00 0.12 1005152.16 1000162.49 1041036.99 00:16:39.991 00:16:40.250 14:30:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:40.250 14:30:47 -- target/delete_subsystem.sh@57 -- # kill -0 71028 00:16:40.250 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71028) - No such process 00:16:40.250 14:30:47 -- target/delete_subsystem.sh@67 -- # wait 71028 00:16:40.250 14:30:47 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:40.250 14:30:47 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:40.250 14:30:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.250 14:30:47 -- nvmf/common.sh@116 -- # sync 00:16:40.250 14:30:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:40.250 14:30:47 -- nvmf/common.sh@119 -- # set +e 00:16:40.250 14:30:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.250 14:30:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:40.250 rmmod nvme_tcp 00:16:40.250 rmmod nvme_fabrics 00:16:40.250 rmmod nvme_keyring 00:16:40.250 14:30:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.250 14:30:47 -- nvmf/common.sh@123 -- # set -e 00:16:40.250 14:30:47 -- nvmf/common.sh@124 -- # return 0 00:16:40.250 14:30:47 -- nvmf/common.sh@477 -- # '[' -n 70931 ']' 00:16:40.250 14:30:47 -- nvmf/common.sh@478 -- # killprocess 70931 00:16:40.250 14:30:47 -- common/autotest_common.sh@936 -- # '[' -z 70931 ']' 00:16:40.250 14:30:47 -- common/autotest_common.sh@940 -- # kill -0 70931 00:16:40.250 14:30:47 -- common/autotest_common.sh@941 -- # uname 00:16:40.250 14:30:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.250 14:30:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70931 00:16:40.508 14:30:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:40.508 14:30:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:40.508 killing process with pid 70931 00:16:40.508 14:30:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70931' 00:16:40.508 14:30:47 -- common/autotest_common.sh@955 -- # kill 70931 00:16:40.508 14:30:47 -- common/autotest_common.sh@960 -- # wait 70931 00:16:40.777 14:30:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:40.777 14:30:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:40.777 14:30:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:40.777 14:30:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.777 14:30:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:40.777 14:30:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.777 
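The polling trace above (delete_subsystem.sh lines 56-60) just waits for the background perf process to exit once the subsystem is gone, tolerating the "No such process" from kill -0, and nvmftestfini then unloads the kernel fabrics modules and kills the target. A condensed sketch of that wait-and-teardown flow, paraphrasing the xtrace rather than quoting the script; variable names come from the trace and the error handling is simplified:

    delay=0
    while kill -0 "$perf_pid"; do
        (( delay++ > 20 )) && exit 1    # roughly a 10 s budget at 0.5 s per poll
        sleep 0.5
    done
    modprobe -v -r nvme-tcp             # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # killprocess for the target (pid 70931 in this run)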
14:30:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.777 14:30:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.777 14:30:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:40.777 ************************************ 00:16:40.777 END TEST nvmf_delete_subsystem 00:16:40.777 ************************************ 00:16:40.777 00:16:40.777 real 0m9.702s 00:16:40.777 user 0m29.360s 00:16:40.777 sys 0m1.538s 00:16:40.777 14:30:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:40.777 14:30:47 -- common/autotest_common.sh@10 -- # set +x 00:16:40.777 14:30:47 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:16:40.777 14:30:47 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:16:40.777 14:30:47 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:40.777 14:30:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:40.777 14:30:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.777 14:30:47 -- common/autotest_common.sh@10 -- # set +x 00:16:40.777 ************************************ 00:16:40.777 START TEST nvmf_vfio_user 00:16:40.777 ************************************ 00:16:40.777 14:30:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:41.051 * Looking for test storage... 00:16:41.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:41.051 14:30:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:41.051 14:30:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:41.051 14:30:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:41.051 14:30:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:41.051 14:30:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:41.051 14:30:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:41.051 14:30:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:41.051 14:30:47 -- scripts/common.sh@335 -- # IFS=.-: 00:16:41.051 14:30:47 -- scripts/common.sh@335 -- # read -ra ver1 00:16:41.051 14:30:47 -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.051 14:30:47 -- scripts/common.sh@336 -- # read -ra ver2 00:16:41.051 14:30:47 -- scripts/common.sh@337 -- # local 'op=<' 00:16:41.051 14:30:47 -- scripts/common.sh@339 -- # ver1_l=2 00:16:41.051 14:30:47 -- scripts/common.sh@340 -- # ver2_l=1 00:16:41.051 14:30:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:41.051 14:30:47 -- scripts/common.sh@343 -- # case "$op" in 00:16:41.051 14:30:47 -- scripts/common.sh@344 -- # : 1 00:16:41.051 14:30:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:41.051 14:30:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.051 14:30:47 -- scripts/common.sh@364 -- # decimal 1 00:16:41.051 14:30:47 -- scripts/common.sh@352 -- # local d=1 00:16:41.051 14:30:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.051 14:30:47 -- scripts/common.sh@354 -- # echo 1 00:16:41.051 14:30:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:41.051 14:30:47 -- scripts/common.sh@365 -- # decimal 2 00:16:41.051 14:30:47 -- scripts/common.sh@352 -- # local d=2 00:16:41.051 14:30:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.051 14:30:47 -- scripts/common.sh@354 -- # echo 2 00:16:41.051 14:30:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:41.051 14:30:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:41.051 14:30:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:41.051 14:30:47 -- scripts/common.sh@367 -- # return 0 00:16:41.051 14:30:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.051 14:30:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.051 --rc genhtml_branch_coverage=1 00:16:41.051 --rc genhtml_function_coverage=1 00:16:41.051 --rc genhtml_legend=1 00:16:41.051 --rc geninfo_all_blocks=1 00:16:41.051 --rc geninfo_unexecuted_blocks=1 00:16:41.051 00:16:41.051 ' 00:16:41.051 14:30:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.051 --rc genhtml_branch_coverage=1 00:16:41.051 --rc genhtml_function_coverage=1 00:16:41.051 --rc genhtml_legend=1 00:16:41.051 --rc geninfo_all_blocks=1 00:16:41.051 --rc geninfo_unexecuted_blocks=1 00:16:41.051 00:16:41.051 ' 00:16:41.051 14:30:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.051 --rc genhtml_branch_coverage=1 00:16:41.051 --rc genhtml_function_coverage=1 00:16:41.051 --rc genhtml_legend=1 00:16:41.051 --rc geninfo_all_blocks=1 00:16:41.051 --rc geninfo_unexecuted_blocks=1 00:16:41.051 00:16:41.052 ' 00:16:41.052 14:30:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.052 --rc genhtml_branch_coverage=1 00:16:41.052 --rc genhtml_function_coverage=1 00:16:41.052 --rc genhtml_legend=1 00:16:41.052 --rc geninfo_all_blocks=1 00:16:41.052 --rc geninfo_unexecuted_blocks=1 00:16:41.052 00:16:41.052 ' 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.052 14:30:47 -- nvmf/common.sh@7 -- # uname -s 00:16:41.052 14:30:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.052 14:30:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.052 14:30:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.052 14:30:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.052 14:30:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.052 14:30:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.052 14:30:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.052 14:30:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.052 14:30:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.052 14:30:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.052 14:30:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
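The scripts/common.sh trace above is a dotted-version comparison: lt 1.15 2 calls cmp_versions 1.15 '<' 2, which splits both version strings on '.', '-' and ':' and compares them field by field; here it decides whether the installed lcov is new enough for the branch/function coverage flags exported just below. A condensed sketch of that comparison logic, reconstructed from the fragments in the xtrace rather than copied from the actual function body:

    cmp_versions() {                    # usage: cmp_versions 1.15 '<' 2
        local op=$2 v
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }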
00:16:41.052 14:30:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:16:41.052 14:30:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.052 14:30:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.052 14:30:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.052 14:30:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.052 14:30:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.052 14:30:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.052 14:30:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.052 14:30:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.052 14:30:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.052 14:30:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.052 14:30:47 -- paths/export.sh@5 -- # export PATH 00:16:41.052 14:30:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.052 14:30:47 -- nvmf/common.sh@46 -- # : 0 00:16:41.052 14:30:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:41.052 14:30:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:41.052 14:30:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:41.052 14:30:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.052 14:30:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.052 14:30:47 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:16:41.052 14:30:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:41.052 14:30:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71168 00:16:41.052 Process pid: 71168 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71168' 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:41.052 14:30:47 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71168 00:16:41.052 14:30:47 -- common/autotest_common.sh@829 -- # '[' -z 71168 ']' 00:16:41.052 14:30:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.052 14:30:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.052 14:30:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.052 14:30:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.052 14:30:47 -- common/autotest_common.sh@10 -- # set +x 00:16:41.311 [2024-12-06 14:30:48.025181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:41.311 [2024-12-06 14:30:48.025341] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.311 [2024-12-06 14:30:48.164208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.571 [2024-12-06 14:30:48.323062] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.571 [2024-12-06 14:30:48.323291] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.571 [2024-12-06 14:30:48.323305] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.571 [2024-12-06 14:30:48.323314] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
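At this point the harness has launched the target (nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]', pid 71168) and is blocked in waitforlisten until the RPC socket answers; the EAL/reactor messages that follow are that target coming up, after which the VFIOUSER transport and malloc-backed subsystems are provisioned over RPC. A condensed sketch of the start-and-wait step; the polling loop is an assumption about what waitforlisten does, not a copy of it:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do      # poll until /var/tmp/spdk.sock accepts RPCs
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER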
00:16:41.571 [2024-12-06 14:30:48.323524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.571 [2024-12-06 14:30:48.324042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.571 [2024-12-06 14:30:48.324210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.571 [2024-12-06 14:30:48.324213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.138 14:30:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.138 14:30:49 -- common/autotest_common.sh@862 -- # return 0 00:16:42.138 14:30:49 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:43.513 14:30:50 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:43.513 14:30:50 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:43.513 14:30:50 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:43.513 14:30:50 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:43.513 14:30:50 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:43.513 14:30:50 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:43.771 Malloc1 00:16:43.771 14:30:50 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:44.029 14:30:50 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:44.286 14:30:51 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:44.544 14:30:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:44.544 14:30:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:44.544 14:30:51 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:44.802 Malloc2 00:16:44.802 14:30:51 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:45.059 14:30:51 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:45.318 14:30:52 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:45.576 14:30:52 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:45.576 14:30:52 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:45.576 14:30:52 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:45.576 14:30:52 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:45.576 14:30:52 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:45.576 14:30:52 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:45.576 [2024-12-06 14:30:52.481325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:45.576 [2024-12-06 14:30:52.481388] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71305 ] 00:16:45.837 [2024-12-06 14:30:52.625570] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:45.837 [2024-12-06 14:30:52.633942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:45.837 [2024-12-06 14:30:52.633986] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd5628f2000 00:16:45.837 [2024-12-06 14:30:52.634925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.635915] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.636926] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.637940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.638931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.639933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.640936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.641947] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.837 [2024-12-06 14:30:52.642952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:45.837 [2024-12-06 14:30:52.642978] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd5628e7000 00:16:45.837 [2024-12-06 14:30:52.644260] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:45.837 [2024-12-06 14:30:52.664100] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:45.837 [2024-12-06 14:30:52.664175] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:45.837 [2024-12-06 14:30:52.667143] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:45.837 [2024-12-06 14:30:52.667243] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:45.837 [2024-12-06 14:30:52.667403] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:45.837 [2024-12-06 
14:30:52.667452] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:45.837 [2024-12-06 14:30:52.667461] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:45.837 [2024-12-06 14:30:52.668121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:45.837 [2024-12-06 14:30:52.668145] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:45.837 [2024-12-06 14:30:52.668157] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:45.837 [2024-12-06 14:30:52.669147] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:45.837 [2024-12-06 14:30:52.669173] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:45.837 [2024-12-06 14:30:52.669186] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:45.837 [2024-12-06 14:30:52.670133] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:45.837 [2024-12-06 14:30:52.670169] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:45.837 [2024-12-06 14:30:52.671158] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:45.837 [2024-12-06 14:30:52.671181] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:45.837 [2024-12-06 14:30:52.671189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:45.837 [2024-12-06 14:30:52.671199] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:45.837 [2024-12-06 14:30:52.671306] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:45.837 [2024-12-06 14:30:52.671312] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:45.837 [2024-12-06 14:30:52.671318] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:45.837 [2024-12-06 14:30:52.672158] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:45.837 [2024-12-06 14:30:52.673147] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:45.838 [2024-12-06 14:30:52.674153] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:16:45.838 [2024-12-06 14:30:52.675202] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:45.838 [2024-12-06 14:30:52.676179] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:45.838 [2024-12-06 14:30:52.676208] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:45.838 [2024-12-06 14:30:52.676215] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676238] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:45.838 [2024-12-06 14:30:52.676258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676278] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:45.838 [2024-12-06 14:30:52.676284] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.838 [2024-12-06 14:30:52.676307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.676398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 14:30:52.676442] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:45.838 [2024-12-06 14:30:52.676449] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:45.838 [2024-12-06 14:30:52.676454] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:45.838 [2024-12-06 14:30:52.676459] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:45.838 [2024-12-06 14:30:52.676465] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:45.838 [2024-12-06 14:30:52.676470] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:45.838 [2024-12-06 14:30:52.676476] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676492] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.676526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 14:30:52.676542] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.838 [2024-12-06 14:30:52.676552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.838 [2024-12-06 14:30:52.676574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.838 [2024-12-06 14:30:52.676584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.838 [2024-12-06 14:30:52.676589] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676604] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.676634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 14:30:52.676641] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:45.838 [2024-12-06 14:30:52.676647] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676668] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.676692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 14:30:52.676756] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676767] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676777] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:45.838 [2024-12-06 14:30:52.676782] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:45.838 [2024-12-06 14:30:52.676790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.676804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 
14:30:52.676821] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:45.838 [2024-12-06 14:30:52.676833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676844] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676852] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:45.838 [2024-12-06 14:30:52.676857] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.838 [2024-12-06 14:30:52.676864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.676897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 14:30:52.676917] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676939] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.676946] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:45.838 [2024-12-06 14:30:52.676951] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.838 [2024-12-06 14:30:52.676965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.676998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 14:30:52.677009] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.677017] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.677029] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.677036] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.677042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.677047] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:45.838 [2024-12-06 14:30:52.677052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:45.838 [2024-12-06 14:30:52.677058] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:45.838 [2024-12-06 14:30:52.677091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.677110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:45.838 [2024-12-06 14:30:52.677126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:45.838 [2024-12-06 14:30:52.677141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:45.839 [2024-12-06 14:30:52.677156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:45.839 [2024-12-06 14:30:52.677186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:45.839 [2024-12-06 14:30:52.677200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:45.839 [2024-12-06 14:30:52.677219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:45.839 [2024-12-06 14:30:52.677238] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:45.839 [2024-12-06 14:30:52.677244] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:45.839 [2024-12-06 14:30:52.677247] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:45.839 [2024-12-06 14:30:52.677251] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:45.839 [2024-12-06 14:30:52.677259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:45.839 [2024-12-06 14:30:52.677267] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:45.839 [2024-12-06 14:30:52.677272] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:45.839 [2024-12-06 14:30:52.677279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:45.839 [2024-12-06 14:30:52.677287] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:45.839 [2024-12-06 14:30:52.677292] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.839 [2024-12-06 14:30:52.677299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.839 [2024-12-06 14:30:52.677308] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:45.839 [2024-12-06 14:30:52.677312] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:45.839 [2024-12-06 14:30:52.677319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:45.839 [2024-12-06 14:30:52.677327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:45.839 [2024-12-06 14:30:52.677347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:45.839 [2024-12-06 14:30:52.677360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:45.839 [2024-12-06 14:30:52.677369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:45.839 ===================================================== 00:16:45.839 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:45.839 ===================================================== 00:16:45.839 Controller Capabilities/Features 00:16:45.839 ================================ 00:16:45.839 Vendor ID: 4e58 00:16:45.839 Subsystem Vendor ID: 4e58 00:16:45.839 Serial Number: SPDK1 00:16:45.839 Model Number: SPDK bdev Controller 00:16:45.839 Firmware Version: 24.01.1 00:16:45.839 Recommended Arb Burst: 6 00:16:45.839 IEEE OUI Identifier: 8d 6b 50 00:16:45.839 Multi-path I/O 00:16:45.839 May have multiple subsystem ports: Yes 00:16:45.839 May have multiple controllers: Yes 00:16:45.839 Associated with SR-IOV VF: No 00:16:45.839 Max Data Transfer Size: 131072 00:16:45.839 Max Number of Namespaces: 32 00:16:45.839 Max Number of I/O Queues: 127 00:16:45.839 NVMe Specification Version (VS): 1.3 00:16:45.839 NVMe Specification Version (Identify): 1.3 00:16:45.839 Maximum Queue Entries: 256 00:16:45.839 Contiguous Queues Required: Yes 00:16:45.839 Arbitration Mechanisms Supported 00:16:45.839 Weighted Round Robin: Not Supported 00:16:45.839 Vendor Specific: Not Supported 00:16:45.839 Reset Timeout: 15000 ms 00:16:45.839 Doorbell Stride: 4 bytes 00:16:45.839 NVM Subsystem Reset: Not Supported 00:16:45.839 Command Sets Supported 00:16:45.839 NVM Command Set: Supported 00:16:45.839 Boot Partition: Not Supported 00:16:45.839 Memory Page Size Minimum: 4096 bytes 00:16:45.839 Memory Page Size Maximum: 4096 bytes 00:16:45.839 Persistent Memory Region: Not Supported 00:16:45.839 Optional Asynchronous Events Supported 00:16:45.839 Namespace Attribute Notices: Supported 00:16:45.839 Firmware Activation Notices: Not Supported 00:16:45.839 ANA Change Notices: Not Supported 00:16:45.839 PLE Aggregate Log Change Notices: Not Supported 00:16:45.839 LBA Status Info Alert Notices: Not Supported 00:16:45.839 EGE Aggregate Log Change Notices: Not Supported 00:16:45.839 Normal NVM Subsystem Shutdown event: Not Supported 00:16:45.839 Zone Descriptor Change Notices: Not Supported 00:16:45.839 Discovery Log Change Notices: Not Supported 00:16:45.839 Controller Attributes 00:16:45.839 128-bit Host Identifier: Supported 00:16:45.839 Non-Operational Permissive Mode: Not Supported 00:16:45.839 NVM Sets: Not Supported 00:16:45.839 Read Recovery Levels: Not Supported 00:16:45.839 Endurance Groups: Not Supported 00:16:45.839 Predictable Latency Mode: Not Supported 00:16:45.839 Traffic Based Keep ALive: Not Supported 00:16:45.839 Namespace Granularity: Not Supported 00:16:45.839 SQ Associations: Not Supported 00:16:45.839 UUID List: Not Supported 00:16:45.839 Multi-Domain Subsystem: Not Supported 00:16:45.839 Fixed Capacity Management: Not Supported 00:16:45.839 
Variable Capacity Management: Not Supported 00:16:45.839 Delete Endurance Group: Not Supported 00:16:45.839 Delete NVM Set: Not Supported 00:16:45.839 Extended LBA Formats Supported: Not Supported 00:16:45.839 Flexible Data Placement Supported: Not Supported 00:16:45.839 00:16:45.839 Controller Memory Buffer Support 00:16:45.839 ================================ 00:16:45.839 Supported: No 00:16:45.839 00:16:45.839 Persistent Memory Region Support 00:16:45.839 ================================ 00:16:45.839 Supported: No 00:16:45.839 00:16:45.839 Admin Command Set Attributes 00:16:45.839 ============================ 00:16:45.839 Security Send/Receive: Not Supported 00:16:45.839 Format NVM: Not Supported 00:16:45.839 Firmware Activate/Download: Not Supported 00:16:45.839 Namespace Management: Not Supported 00:16:45.839 Device Self-Test: Not Supported 00:16:45.839 Directives: Not Supported 00:16:45.839 NVMe-MI: Not Supported 00:16:45.839 Virtualization Management: Not Supported 00:16:45.839 Doorbell Buffer Config: Not Supported 00:16:45.839 Get LBA Status Capability: Not Supported 00:16:45.839 Command & Feature Lockdown Capability: Not Supported 00:16:45.839 Abort Command Limit: 4 00:16:45.839 Async Event Request Limit: 4 00:16:45.839 Number of Firmware Slots: N/A 00:16:45.839 Firmware Slot 1 Read-Only: N/A 00:16:45.839 Firmware Activation Without Reset: N/A 00:16:45.839 Multiple Update Detection Support: N/A 00:16:45.839 Firmware Update Granularity: No Information Provided 00:16:45.839 Per-Namespace SMART Log: No 00:16:45.839 Asymmetric Namespace Access Log Page: Not Supported 00:16:45.839 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:45.839 Command Effects Log Page: Supported 00:16:45.839 Get Log Page Extended Data: Supported 00:16:45.839 Telemetry Log Pages: Not Supported 00:16:45.839 Persistent Event Log Pages: Not Supported 00:16:45.839 Supported Log Pages Log Page: May Support 00:16:45.839 Commands Supported & Effects Log Page: Not Supported 00:16:45.839 Feature Identifiers & Effects Log Page:May Support 00:16:45.839 NVMe-MI Commands & Effects Log Page: May Support 00:16:45.839 Data Area 4 for Telemetry Log: Not Supported 00:16:45.839 Error Log Page Entries Supported: 128 00:16:45.839 Keep Alive: Supported 00:16:45.839 Keep Alive Granularity: 10000 ms 00:16:45.839 00:16:45.839 NVM Command Set Attributes 00:16:45.839 ========================== 00:16:45.839 Submission Queue Entry Size 00:16:45.839 Max: 64 00:16:45.839 Min: 64 00:16:45.839 Completion Queue Entry Size 00:16:45.839 Max: 16 00:16:45.839 Min: 16 00:16:45.839 Number of Namespaces: 32 00:16:45.839 Compare Command: Supported 00:16:45.839 Write Uncorrectable Command: Not Supported 00:16:45.839 Dataset Management Command: Supported 00:16:45.839 Write Zeroes Command: Supported 00:16:45.839 Set Features Save Field: Not Supported 00:16:45.839 Reservations: Not Supported 00:16:45.839 Timestamp: Not Supported 00:16:45.839 Copy: Supported 00:16:45.839 Volatile Write Cache: Present 00:16:45.839 Atomic Write Unit (Normal): 1 00:16:45.839 Atomic Write Unit (PFail): 1 00:16:45.839 Atomic Compare & Write Unit: 1 00:16:45.839 Fused Compare & Write: Supported 00:16:45.839 Scatter-Gather List 00:16:45.839 SGL Command Set: Supported (Dword aligned) 00:16:45.839 SGL Keyed: Not Supported 00:16:45.839 SGL Bit Bucket Descriptor: Not Supported 00:16:45.839 SGL Metadata Pointer: Not Supported 00:16:45.839 Oversized SGL: Not Supported 00:16:45.839 SGL Metadata Address: Not Supported 00:16:45.839 SGL Offset: Not Supported 00:16:45.839 Transport SGL Data 
Block: Not Supported 00:16:45.839 Replay Protected Memory Block: Not Supported 00:16:45.839 00:16:45.840 Firmware Slot Information 00:16:45.840 ========================= 00:16:45.840 Active slot: 1 00:16:45.840 Slot 1 Firmware Revision: 24.01.1 00:16:45.840 00:16:45.840 00:16:45.840 Commands Supported and Effects 00:16:45.840 ============================== 00:16:45.840 Admin Commands 00:16:45.840 -------------- 00:16:45.840 Get Log Page (02h): Supported 00:16:45.840 Identify (06h): Supported 00:16:45.840 Abort (08h): Supported 00:16:45.840 Set Features (09h): Supported 00:16:45.840 Get Features (0Ah): Supported 00:16:45.840 Asynchronous Event Request (0Ch): Supported 00:16:45.840 Keep Alive (18h): Supported 00:16:45.840 I/O Commands 00:16:45.840 ------------ 00:16:45.840 Flush (00h): Supported LBA-Change 00:16:45.840 Write (01h): Supported LBA-Change 00:16:45.840 Read (02h): Supported 00:16:45.840 Compare (05h): Supported 00:16:45.840 Write Zeroes (08h): Supported LBA-Change 00:16:45.840 Dataset Management (09h): Supported LBA-Change 00:16:45.840 Copy (19h): Supported LBA-Change 00:16:45.840 Unknown (79h): Supported LBA-Change 00:16:45.840 Unknown (7Ah): Supported 00:16:45.840 00:16:45.840 Error Log 00:16:45.840 ========= 00:16:45.840 00:16:45.840 Arbitration 00:16:45.840 =========== 00:16:45.840 Arbitration Burst: 1 00:16:45.840 00:16:45.840 Power Management 00:16:45.840 ================ 00:16:45.840 Number of Power States: 1 00:16:45.840 Current Power State: Power State #0 00:16:45.840 Power State #0: 00:16:45.840 Max Power: 0.00 W 00:16:45.840 Non-Operational State: Operational 00:16:45.840 Entry Latency: Not Reported 00:16:45.840 Exit Latency: Not Reported 00:16:45.840 Relative Read Throughput: 0 00:16:45.840 Relative Read Latency: 0 00:16:45.840 Relative Write Throughput: 0 00:16:45.840 Relative Write Latency: 0 00:16:45.840 Idle Power: Not Reported 00:16:45.840 Active Power: Not Reported 00:16:45.840 Non-Operational Permissive Mode: Not Supported 00:16:45.840 00:16:45.840 Health Information 00:16:45.840 ================== 00:16:45.840 Critical Warnings: 00:16:45.840 Available Spare Space: OK 00:16:45.840 Temperature: OK 00:16:45.840 Device Reliability: OK 00:16:45.840 Read Only: No 00:16:45.840 Volatile Memory Backup: OK 00:16:45.840 Current Temperature: 0 Kelvin[2024-12-06 14:30:52.677545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:45.840 [2024-12-06 14:30:52.677574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:45.840 [2024-12-06 14:30:52.677611] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:45.840 [2024-12-06 14:30:52.677623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.840 [2024-12-06 14:30:52.677631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.840 [2024-12-06 14:30:52.677638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.840 [2024-12-06 14:30:52.677645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.840 [2024-12-06 14:30:52.680427] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:45.840 [2024-12-06 14:30:52.680459] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:45.840 [2024-12-06 14:30:52.681286] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:45.840 [2024-12-06 14:30:52.681308] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:45.840 [2024-12-06 14:30:52.682228] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:45.840 [2024-12-06 14:30:52.682260] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:45.840 [2024-12-06 14:30:52.682552] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:45.840 [2024-12-06 14:30:52.685429] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:45.840 (-273 Celsius) 00:16:45.840 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:45.840 Available Spare: 0% 00:16:45.840 Available Spare Threshold: 0% 00:16:45.840 Life Percentage Used: 0% 00:16:45.840 Data Units Read: 0 00:16:45.840 Data Units Written: 0 00:16:45.840 Host Read Commands: 0 00:16:45.840 Host Write Commands: 0 00:16:45.840 Controller Busy Time: 0 minutes 00:16:45.840 Power Cycles: 0 00:16:45.840 Power On Hours: 0 hours 00:16:45.840 Unsafe Shutdowns: 0 00:16:45.840 Unrecoverable Media Errors: 0 00:16:45.840 Lifetime Error Log Entries: 0 00:16:45.840 Warning Temperature Time: 0 minutes 00:16:45.840 Critical Temperature Time: 0 minutes 00:16:45.840 00:16:45.840 Number of Queues 00:16:45.840 ================ 00:16:45.840 Number of I/O Submission Queues: 127 00:16:45.840 Number of I/O Completion Queues: 127 00:16:45.840 00:16:45.840 Active Namespaces 00:16:45.840 ================= 00:16:45.840 Namespace ID:1 00:16:45.840 Error Recovery Timeout: Unlimited 00:16:45.840 Command Set Identifier: NVM (00h) 00:16:45.840 Deallocate: Supported 00:16:45.840 Deallocated/Unwritten Error: Not Supported 00:16:45.840 Deallocated Read Value: Unknown 00:16:45.840 Deallocate in Write Zeroes: Not Supported 00:16:45.840 Deallocated Guard Field: 0xFFFF 00:16:45.840 Flush: Supported 00:16:45.840 Reservation: Supported 00:16:45.840 Namespace Sharing Capabilities: Multiple Controllers 00:16:45.840 Size (in LBAs): 131072 (0GiB) 00:16:45.840 Capacity (in LBAs): 131072 (0GiB) 00:16:45.840 Utilization (in LBAs): 131072 (0GiB) 00:16:45.840 NGUID: 0ED3C5A91DAF4571A3A4B9D52896F966 00:16:45.840 UUID: 0ed3c5a9-1daf-4571-a3a4-b9d52896f966 00:16:45.840 Thin Provisioning: Not Supported 00:16:45.840 Per-NS Atomic Units: Yes 00:16:45.840 Atomic Boundary Size (Normal): 0 00:16:45.840 Atomic Boundary Size (PFail): 0 00:16:45.840 Atomic Boundary Offset: 0 00:16:45.840 Maximum Single Source Range Length: 65535 00:16:45.840 Maximum Copy Length: 65535 00:16:45.840 Maximum Source Range Count: 1 00:16:45.840 NGUID/EUI64 Never Reused: No 00:16:45.840 Namespace Write Protected: No 00:16:45.840 Number of LBA Formats: 1 00:16:45.840 Current LBA Format: LBA Format #00 00:16:45.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:45.840 00:16:45.840 
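For reference, the identify dump above can also be reproduced programmatically over the VFIOUSER transport instead of through the spdk_nvme_identify tool. The following is a minimal C sketch of that connect-and-query flow, loosely modeled on SPDK's examples/nvme/hello_world (which this run invokes later); the traddr and subnqn are the ones used by this test, while the program name, the minimal error handling, and the exact spdk_env_opts_init/spdk_nvme_connect usage are illustrative and may need adjustment for a given SPDK release.

/* Hypothetical sketch (not the test's code): connect to the vfio-user endpoint
 * exercised above and print a few controller identify fields. Modeled loosely
 * on SPDK's examples/nvme/hello_world; signatures may vary between releases. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "vfio_user_identify_sketch";   /* name is illustrative only */
    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init() failed\n");
        return 1;
    }

    /* Same transport string the tools in this run pass via -r. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
        "subnqn:nqn.2019-07.io.spdk:cnode1") != 0) {
        fprintf(stderr, "could not parse transport ID\n");
        return 1;
    }

    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "spdk_nvme_connect() failed\n");
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    /* Identify fields are fixed-width and not NUL-terminated, hence the
     * precision specifiers. */
    printf("SN: %-.20s  MN: %-.40s  FR: %-.8s\n",
           (const char *)cdata->sn, (const char *)cdata->mn,
           (const char *)cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}

Built against the SPDK checkout used here and run while the target is up, such a program should print the same Serial Number, Model Number, and Firmware Version fields that the identify output above reports.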
14:30:52 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:51.172 Initializing NVMe Controllers 00:16:51.172 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:51.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:51.172 Initialization complete. Launching workers. 00:16:51.172 ======================================================== 00:16:51.172 Latency(us) 00:16:51.172 Device Information : IOPS MiB/s Average min max 00:16:51.172 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 29336.12 114.59 4362.74 1112.74 12131.07 00:16:51.172 ======================================================== 00:16:51.172 Total : 29336.12 114.59 4362.74 1112.74 12131.07 00:16:51.172 00:16:51.172 14:30:58 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:57.788 Initializing NVMe Controllers 00:16:57.788 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:57.788 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:57.788 Initialization complete. Launching workers. 00:16:57.788 ======================================================== 00:16:57.788 Latency(us) 00:16:57.788 Device Information : IOPS MiB/s Average min max 00:16:57.788 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15850.05 61.91 8083.33 4021.33 14769.40 00:16:57.788 ======================================================== 00:16:57.788 Total : 15850.05 61.91 8083.33 4021.33 14769.40 00:16:57.788 00:16:57.788 14:31:03 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:02.025 Initializing NVMe Controllers 00:17:02.025 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:02.025 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:02.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:02.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:02.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:02.025 Initialization complete. Launching workers. 
00:17:02.025 Starting thread on core 2 00:17:02.025 Starting thread on core 3 00:17:02.025 Starting thread on core 1 00:17:02.025 14:31:08 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:05.340 Initializing NVMe Controllers 00:17:05.340 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.340 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:05.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:05.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:05.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:05.340 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:05.340 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:05.340 Initialization complete. Launching workers. 00:17:05.340 Starting thread on core 1 with urgent priority queue 00:17:05.340 Starting thread on core 2 with urgent priority queue 00:17:05.340 Starting thread on core 0 with urgent priority queue 00:17:05.340 Starting thread on core 3 with urgent priority queue 00:17:05.340 SPDK bdev Controller (SPDK1 ) core 0: 7572.67 IO/s 13.21 secs/100000 ios 00:17:05.340 SPDK bdev Controller (SPDK1 ) core 1: 6974.33 IO/s 14.34 secs/100000 ios 00:17:05.340 SPDK bdev Controller (SPDK1 ) core 2: 7345.67 IO/s 13.61 secs/100000 ios 00:17:05.340 SPDK bdev Controller (SPDK1 ) core 3: 7205.33 IO/s 13.88 secs/100000 ios 00:17:05.340 ======================================================== 00:17:05.340 00:17:05.340 14:31:12 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:05.907 Initializing NVMe Controllers 00:17:05.907 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.907 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.907 Namespace ID: 1 size: 0GB 00:17:05.907 Initialization complete. 00:17:05.907 INFO: using host memory buffer for IO 00:17:05.907 Hello world! 00:17:05.907 14:31:12 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:07.284 Initializing NVMe Controllers 00:17:07.284 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:07.284 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:07.284 Initialization complete. Launching workers. 
00:17:07.284 submit (in ns) avg, min, max = 6709.3, 3535.5, 4035644.1 00:17:07.284 complete (in ns) avg, min, max = 26996.4, 2068.2, 5064100.9 00:17:07.284 00:17:07.284 Submit histogram 00:17:07.284 ================ 00:17:07.284 Range in us Cumulative Count 00:17:07.284 3.535 - 3.549: 0.1233% ( 16) 00:17:07.284 3.549 - 3.564: 0.7395% ( 80) 00:17:07.284 3.564 - 3.578: 2.0569% ( 171) 00:17:07.284 3.578 - 3.593: 3.7825% ( 224) 00:17:07.284 3.593 - 3.607: 4.6530% ( 113) 00:17:07.284 3.607 - 3.622: 5.3078% ( 85) 00:17:07.284 3.622 - 3.636: 5.8393% ( 69) 00:17:07.284 3.636 - 3.651: 6.4248% ( 76) 00:17:07.284 3.651 - 3.665: 8.7204% ( 298) 00:17:07.284 3.665 - 3.680: 13.0036% ( 556) 00:17:07.284 3.680 - 3.695: 18.6349% ( 731) 00:17:07.284 3.695 - 3.709: 21.6855% ( 396) 00:17:07.284 3.709 - 3.724: 24.8286% ( 408) 00:17:07.284 3.724 - 3.753: 29.7204% ( 635) 00:17:07.284 3.753 - 3.782: 40.7442% ( 1431) 00:17:07.284 3.782 - 3.811: 63.3695% ( 2937) 00:17:07.284 3.811 - 3.840: 75.4641% ( 1570) 00:17:07.284 3.840 - 3.869: 82.3511% ( 894) 00:17:07.284 3.869 - 3.898: 84.7700% ( 314) 00:17:07.284 3.898 - 3.927: 86.1259% ( 176) 00:17:07.284 3.927 - 3.956: 87.0580% ( 121) 00:17:07.284 3.956 - 3.985: 87.8669% ( 105) 00:17:07.284 3.985 - 4.015: 88.8683% ( 130) 00:17:07.284 4.015 - 4.044: 90.1240% ( 163) 00:17:07.284 4.044 - 4.073: 91.3104% ( 154) 00:17:07.284 4.073 - 4.102: 91.9960% ( 89) 00:17:07.284 4.102 - 4.131: 93.8526% ( 241) 00:17:07.284 4.131 - 4.160: 95.5011% ( 214) 00:17:07.284 4.160 - 4.189: 96.7106% ( 157) 00:17:07.284 4.189 - 4.218: 97.2190% ( 66) 00:17:07.284 4.218 - 4.247: 97.4424% ( 29) 00:17:07.284 4.247 - 4.276: 97.5195% ( 10) 00:17:07.284 4.276 - 4.305: 97.6042% ( 11) 00:17:07.284 4.305 - 4.335: 97.6812% ( 10) 00:17:07.284 4.335 - 4.364: 97.7120% ( 4) 00:17:07.284 4.364 - 4.393: 97.7352% ( 3) 00:17:07.284 4.393 - 4.422: 97.7583% ( 3) 00:17:07.284 4.422 - 4.451: 97.7891% ( 4) 00:17:07.284 4.451 - 4.480: 97.8045% ( 2) 00:17:07.284 4.480 - 4.509: 97.8199% ( 2) 00:17:07.284 4.538 - 4.567: 97.8353% ( 2) 00:17:07.284 4.625 - 4.655: 97.8430% ( 1) 00:17:07.284 4.655 - 4.684: 97.8507% ( 1) 00:17:07.284 4.684 - 4.713: 97.8661% ( 2) 00:17:07.284 4.713 - 4.742: 97.8892% ( 3) 00:17:07.284 4.742 - 4.771: 97.9046% ( 2) 00:17:07.284 4.771 - 4.800: 97.9431% ( 5) 00:17:07.284 4.800 - 4.829: 98.0125% ( 9) 00:17:07.284 4.829 - 4.858: 98.0664% ( 7) 00:17:07.284 4.858 - 4.887: 98.1357% ( 9) 00:17:07.284 4.887 - 4.916: 98.1743% ( 5) 00:17:07.284 4.916 - 4.945: 98.2205% ( 6) 00:17:07.284 4.945 - 4.975: 98.3283% ( 14) 00:17:07.284 4.975 - 5.004: 98.3745% ( 6) 00:17:07.284 5.004 - 5.033: 98.4362% ( 8) 00:17:07.284 5.033 - 5.062: 98.5055% ( 9) 00:17:07.284 5.062 - 5.091: 98.5517% ( 6) 00:17:07.284 5.091 - 5.120: 98.5671% ( 2) 00:17:07.284 5.120 - 5.149: 98.5980% ( 4) 00:17:07.284 5.149 - 5.178: 98.6673% ( 9) 00:17:07.284 5.178 - 5.207: 98.6981% ( 4) 00:17:07.284 5.236 - 5.265: 98.7135% ( 2) 00:17:07.284 5.265 - 5.295: 98.7212% ( 1) 00:17:07.284 5.295 - 5.324: 98.7597% ( 5) 00:17:07.284 5.324 - 5.353: 98.7751% ( 2) 00:17:07.284 5.353 - 5.382: 98.7828% ( 1) 00:17:07.284 5.382 - 5.411: 98.7905% ( 1) 00:17:07.284 5.411 - 5.440: 98.7982% ( 1) 00:17:07.284 5.556 - 5.585: 98.8059% ( 1) 00:17:07.284 5.585 - 5.615: 98.8214% ( 2) 00:17:07.284 5.644 - 5.673: 98.8368% ( 2) 00:17:07.284 5.702 - 5.731: 98.8445% ( 1) 00:17:07.284 5.789 - 5.818: 98.8599% ( 2) 00:17:07.284 6.196 - 6.225: 98.8676% ( 1) 00:17:07.284 6.400 - 6.429: 98.8753% ( 1) 00:17:07.284 6.749 - 6.778: 98.8830% ( 1) 00:17:07.284 6.807 - 6.836: 98.8907% ( 1) 
00:17:07.284 7.244 - 7.273: 98.8984% ( 1) 00:17:07.284 7.680 - 7.738: 98.9061% ( 1) 00:17:07.284 7.971 - 8.029: 98.9215% ( 2) 00:17:07.285 8.087 - 8.145: 98.9292% ( 1) 00:17:07.285 8.204 - 8.262: 98.9446% ( 2) 00:17:07.285 8.320 - 8.378: 98.9600% ( 2) 00:17:07.285 8.960 - 9.018: 98.9677% ( 1) 00:17:07.285 9.251 - 9.309: 98.9831% ( 2) 00:17:07.285 9.425 - 9.484: 98.9908% ( 1) 00:17:07.285 9.484 - 9.542: 98.9985% ( 1) 00:17:07.285 9.542 - 9.600: 99.0216% ( 3) 00:17:07.285 9.658 - 9.716: 99.0294% ( 1) 00:17:07.285 9.716 - 9.775: 99.0679% ( 5) 00:17:07.285 9.775 - 9.833: 99.0756% ( 1) 00:17:07.285 9.833 - 9.891: 99.0833% ( 1) 00:17:07.285 9.949 - 10.007: 99.1064% ( 3) 00:17:07.285 10.007 - 10.065: 99.1218% ( 2) 00:17:07.285 10.065 - 10.124: 99.1295% ( 1) 00:17:07.285 10.182 - 10.240: 99.1372% ( 1) 00:17:07.285 10.240 - 10.298: 99.1449% ( 1) 00:17:07.285 10.356 - 10.415: 99.1680% ( 3) 00:17:07.285 10.415 - 10.473: 99.1834% ( 2) 00:17:07.285 10.473 - 10.531: 99.1911% ( 1) 00:17:07.285 10.531 - 10.589: 99.2219% ( 4) 00:17:07.285 10.647 - 10.705: 99.2296% ( 1) 00:17:07.285 10.764 - 10.822: 99.2451% ( 2) 00:17:07.285 10.822 - 10.880: 99.2528% ( 1) 00:17:07.285 10.996 - 11.055: 99.2605% ( 1) 00:17:07.285 11.113 - 11.171: 99.2682% ( 1) 00:17:07.285 11.171 - 11.229: 99.2836% ( 2) 00:17:07.285 11.229 - 11.287: 99.2913% ( 1) 00:17:07.285 11.287 - 11.345: 99.3067% ( 2) 00:17:07.285 11.345 - 11.404: 99.3221% ( 2) 00:17:07.285 11.462 - 11.520: 99.3298% ( 1) 00:17:07.285 11.520 - 11.578: 99.3452% ( 2) 00:17:07.285 11.578 - 11.636: 99.3529% ( 1) 00:17:07.285 11.985 - 12.044: 99.3606% ( 1) 00:17:07.285 12.276 - 12.335: 99.3837% ( 3) 00:17:07.285 12.335 - 12.393: 99.3914% ( 1) 00:17:07.285 12.393 - 12.451: 99.3991% ( 1) 00:17:07.285 12.509 - 12.567: 99.4068% ( 1) 00:17:07.285 12.858 - 12.916: 99.4222% ( 2) 00:17:07.285 12.975 - 13.033: 99.4299% ( 1) 00:17:07.285 13.265 - 13.324: 99.4376% ( 1) 00:17:07.285 14.371 - 14.429: 99.4453% ( 1) 00:17:07.285 14.545 - 14.604: 99.4530% ( 1) 00:17:07.285 15.011 - 15.127: 99.4608% ( 1) 00:17:07.285 15.476 - 15.593: 99.4685% ( 1) 00:17:07.285 16.407 - 16.524: 99.4762% ( 1) 00:17:07.285 16.756 - 16.873: 99.4993% ( 3) 00:17:07.285 17.222 - 17.338: 99.5070% ( 1) 00:17:07.285 17.920 - 18.036: 99.5147% ( 1) 00:17:07.285 18.269 - 18.385: 99.5224% ( 1) 00:17:07.285 18.502 - 18.618: 99.5378% ( 2) 00:17:07.285 18.618 - 18.735: 99.5609% ( 3) 00:17:07.285 18.735 - 18.851: 99.6225% ( 8) 00:17:07.285 18.851 - 18.967: 99.6456% ( 3) 00:17:07.285 18.967 - 19.084: 99.6765% ( 4) 00:17:07.285 19.084 - 19.200: 99.6842% ( 1) 00:17:07.285 19.200 - 19.316: 99.6919% ( 1) 00:17:07.285 19.433 - 19.549: 99.6996% ( 1) 00:17:07.285 19.549 - 19.665: 99.7150% ( 2) 00:17:07.285 19.782 - 19.898: 99.7227% ( 1) 00:17:07.285 19.898 - 20.015: 99.7689% ( 6) 00:17:07.285 20.015 - 20.131: 99.7843% ( 2) 00:17:07.285 20.131 - 20.247: 99.8228% ( 5) 00:17:07.285 20.247 - 20.364: 99.8613% ( 5) 00:17:07.285 20.364 - 20.480: 99.8844% ( 3) 00:17:07.285 20.596 - 20.713: 99.8999% ( 2) 00:17:07.285 20.713 - 20.829: 99.9076% ( 1) 00:17:07.285 21.062 - 21.178: 99.9153% ( 1) 00:17:07.285 21.295 - 21.411: 99.9230% ( 1) 00:17:07.285 21.993 - 22.109: 99.9307% ( 1) 00:17:07.285 3991.738 - 4021.527: 99.9923% ( 8) 00:17:07.285 4021.527 - 4051.316: 100.0000% ( 1) 00:17:07.285 00:17:07.285 Complete histogram 00:17:07.285 ================== 00:17:07.285 Range in us Cumulative Count 00:17:07.285 2.065 - 2.080: 0.0847% ( 11) 00:17:07.285 2.080 - 2.095: 3.0737% ( 388) 00:17:07.285 2.095 - 2.109: 6.3940% ( 431) 00:17:07.285 2.109 - 2.124: 
6.8870% ( 64) 00:17:07.285 2.124 - 2.138: 6.9101% ( 3) 00:17:07.285 2.138 - 2.153: 8.1581% ( 162) 00:17:07.285 2.153 - 2.167: 24.9519% ( 2180) 00:17:07.285 2.167 - 2.182: 34.4812% ( 1237) 00:17:07.285 2.182 - 2.196: 35.5674% ( 141) 00:17:07.285 2.196 - 2.211: 35.7523% ( 24) 00:17:07.285 2.211 - 2.225: 36.5149% ( 99) 00:17:07.285 2.225 - 2.240: 59.9029% ( 3036) 00:17:07.285 2.240 - 2.255: 90.4938% ( 3971) 00:17:07.285 2.255 - 2.269: 95.0697% ( 594) 00:17:07.285 2.269 - 2.284: 95.6398% ( 74) 00:17:07.285 2.284 - 2.298: 96.2715% ( 82) 00:17:07.285 2.298 - 2.313: 96.9032% ( 82) 00:17:07.285 2.313 - 2.327: 97.2729% ( 48) 00:17:07.285 2.327 - 2.342: 97.5888% ( 41) 00:17:07.285 2.342 - 2.356: 97.8738% ( 37) 00:17:07.285 2.356 - 2.371: 98.0587% ( 24) 00:17:07.285 2.371 - 2.385: 98.1743% ( 15) 00:17:07.285 2.385 - 2.400: 98.2975% ( 16) 00:17:07.285 2.400 - 2.415: 98.3360% ( 5) 00:17:07.285 2.415 - 2.429: 98.3745% ( 5) 00:17:07.285 2.429 - 2.444: 98.4439% ( 9) 00:17:07.285 2.444 - 2.458: 98.4978% ( 7) 00:17:07.285 2.458 - 2.473: 98.5209% ( 3) 00:17:07.285 2.473 - 2.487: 98.5517% ( 4) 00:17:07.285 2.487 - 2.502: 98.5671% ( 2) 00:17:07.285 2.502 - 2.516: 98.5748% ( 1) 00:17:07.285 2.531 - 2.545: 98.5825% ( 1) 00:17:07.285 2.560 - 2.575: 98.5980% ( 2) 00:17:07.285 2.575 - 2.589: 98.6134% ( 2) 00:17:07.285 2.604 - 2.618: 98.6211% ( 1) 00:17:07.285 2.647 - 2.662: 98.6365% ( 2) 00:17:07.285 2.720 - 2.735: 98.6442% ( 1) 00:17:07.285 2.953 - 2.967: 98.6519% ( 1) 00:17:07.285 3.040 - 3.055: 98.6596% ( 1) 00:17:07.285 3.782 - 3.811: 98.6673% ( 1) 00:17:07.285 3.811 - 3.840: 98.6827% ( 2) 00:17:07.285 3.840 - 3.869: 98.6904% ( 1) 00:17:07.285 3.898 - 3.927: 98.6981% ( 1) 00:17:07.285 3.956 - 3.985: 98.7058% ( 1) 00:17:07.285 3.985 - 4.015: 98.7289% ( 3) 00:17:07.285 4.015 - 4.044: 98.7443% ( 2) 00:17:07.285 4.044 - 4.073: 98.7674% ( 3) 00:17:07.285 4.131 - 4.160: 98.7751% ( 1) 00:17:07.285 4.189 - 4.218: 98.7828% ( 1) 00:17:07.285 4.218 - 4.247: 98.7982% ( 2) 00:17:07.285 4.247 - 4.276: 98.8291% ( 4) 00:17:07.285 4.276 - 4.305: 98.8368% ( 1) 00:17:07.285 4.335 - 4.364: 98.8445% ( 1) 00:17:07.285 4.364 - 4.393: 98.8599% ( 2) 00:17:07.285 4.393 - 4.422: 98.8830% ( 3) 00:17:07.285 4.422 - 4.451: 98.8984% ( 2) 00:17:07.285 4.451 - 4.480: 98.9061% ( 1) 00:17:07.285 4.509 - 4.538: 98.9138% ( 1) 00:17:07.285 4.800 - 4.829: 98.9215% ( 1) 00:17:07.285 4.887 - 4.916: 98.9292% ( 1) 00:17:07.285 4.916 - 4.945: 98.9369% ( 1) 00:17:07.285 5.004 - 5.033: 98.9446% ( 1) 00:17:07.285 5.120 - 5.149: 98.9523% ( 1) 00:17:07.285 5.178 - 5.207: 98.9600% ( 1) 00:17:07.285 7.418 - 7.447: 98.9677% ( 1) 00:17:07.285 7.971 - 8.029: 98.9754% ( 1) 00:17:07.285 8.262 - 8.320: 98.9831% ( 1) 00:17:07.285 8.378 - 8.436: 98.9908% ( 1) 00:17:07.285 8.495 - 8.553: 99.0062% ( 2) 00:17:07.285 8.553 - 8.611: 99.0139% ( 1) 00:17:07.285 8.611 - 8.669: 99.0216% ( 1) 00:17:07.285 8.669 - 8.727: 99.0371% ( 2) 00:17:07.285 8.960 - 9.018: 99.0448% ( 1) 00:17:07.285 9.135 - 9.193: 99.0525% ( 1) 00:17:07.285 9.193 - 9.251: 99.0679% ( 2) 00:17:07.285 9.251 - 9.309: 99.0756% ( 1) 00:17:07.285 9.367 - 9.425: 99.0833% ( 1) 00:17:07.285 9.542 - 9.600: 99.1064% ( 3) 00:17:07.285 10.996 - 11.055: 99.1141% ( 1) 00:17:07.285 13.382 - 13.440: 99.1218% ( 1) 00:17:07.285 13.905 - 13.964: 99.1295% ( 1) 00:17:07.285 14.255 - 14.313: 99.1372% ( 1) 00:17:07.285 14.604 - 14.662: 99.1449% ( 1) 00:17:07.285 15.127 - 15.244: 99.1526% ( 1) 00:17:07.285 15.244 - 15.360: 99.1603% ( 1) 00:17:07.285 15.360 - 15.476: 99.1680% ( 1) 00:17:07.285 15.476 - 15.593: 99.1757% ( 1) 
00:17:07.285 15.593 - 15.709: 99.1834% ( 1) 00:17:07.285 15.709 - 15.825: 99.1911% ( 1) 00:17:07.285 15.942 - 16.058: 99.1988% ( 1) 00:17:07.285 16.407 - 16.524: 99.2065% ( 1) 00:17:07.285 16.524 - 16.640: 99.2142% ( 1) 00:17:07.285 16.989 - 17.105: 99.2219% ( 1) 00:17:07.285 17.105 - 17.222: 99.2451% ( 3) 00:17:07.285 17.222 - 17.338: 99.2605% ( 2) 00:17:07.285 17.338 - 17.455: 99.2682% ( 1) 00:17:07.285 17.920 - 18.036: 99.2759% ( 1) 00:17:07.285 18.153 - 18.269: 99.2836% ( 1) 00:17:07.285 18.269 - 18.385: 99.3144% ( 4) 00:17:07.285 18.502 - 18.618: 99.3452% ( 4) 00:17:07.285 18.851 - 18.967: 99.3529% ( 1) 00:17:07.285 983.040 - 990.487: 99.3606% ( 1) 00:17:07.285 1012.829 - 1020.276: 99.3837% ( 3) 00:17:07.285 1995.869 - 2010.764: 99.3914% ( 1) 00:17:07.285 2010.764 - 2025.658: 99.3991% ( 1) 00:17:07.285 2025.658 - 2040.553: 99.4068% ( 1) 00:17:07.285 3038.487 - 3053.382: 99.4222% ( 2) 00:17:07.285 3961.949 - 3991.738: 99.4299% ( 1) 00:17:07.285 3991.738 - 4021.527: 99.9076% ( 62) 00:17:07.285 4021.527 - 4051.316: 99.9692% ( 8) 00:17:07.285 4974.778 - 5004.567: 99.9846% ( 2) 00:17:07.285 5004.567 - 5034.356: 99.9923% ( 1) 00:17:07.285 5034.356 - 5064.145: 100.0000% ( 1) 00:17:07.285 00:17:07.286 14:31:14 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:07.286 14:31:14 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:07.286 14:31:14 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:07.286 14:31:14 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:07.286 14:31:14 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:07.544 [2024-12-06 14:31:14.272628] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:07.544 [ 00:17:07.544 { 00:17:07.544 "allow_any_host": true, 00:17:07.544 "hosts": [], 00:17:07.544 "listen_addresses": [], 00:17:07.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:07.544 "subtype": "Discovery" 00:17:07.544 }, 00:17:07.544 { 00:17:07.544 "allow_any_host": true, 00:17:07.544 "hosts": [], 00:17:07.544 "listen_addresses": [ 00:17:07.544 { 00:17:07.544 "adrfam": "IPv4", 00:17:07.544 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:07.544 "transport": "VFIOUSER", 00:17:07.544 "trsvcid": "0", 00:17:07.544 "trtype": "VFIOUSER" 00:17:07.544 } 00:17:07.544 ], 00:17:07.544 "max_cntlid": 65519, 00:17:07.544 "max_namespaces": 32, 00:17:07.544 "min_cntlid": 1, 00:17:07.544 "model_number": "SPDK bdev Controller", 00:17:07.544 "namespaces": [ 00:17:07.544 { 00:17:07.544 "bdev_name": "Malloc1", 00:17:07.544 "name": "Malloc1", 00:17:07.544 "nguid": "0ED3C5A91DAF4571A3A4B9D52896F966", 00:17:07.544 "nsid": 1, 00:17:07.544 "uuid": "0ed3c5a9-1daf-4571-a3a4-b9d52896f966" 00:17:07.544 } 00:17:07.544 ], 00:17:07.544 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:07.544 "serial_number": "SPDK1", 00:17:07.544 "subtype": "NVMe" 00:17:07.544 }, 00:17:07.544 { 00:17:07.544 "allow_any_host": true, 00:17:07.544 "hosts": [], 00:17:07.544 "listen_addresses": [ 00:17:07.544 { 00:17:07.544 "adrfam": "IPv4", 00:17:07.544 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:07.544 "transport": "VFIOUSER", 00:17:07.544 "trsvcid": "0", 00:17:07.544 "trtype": "VFIOUSER" 00:17:07.544 } 00:17:07.544 ], 00:17:07.544 "max_cntlid": 65519, 00:17:07.544 
"max_namespaces": 32, 00:17:07.544 "min_cntlid": 1, 00:17:07.544 "model_number": "SPDK bdev Controller", 00:17:07.544 "namespaces": [ 00:17:07.544 { 00:17:07.544 "bdev_name": "Malloc2", 00:17:07.544 "name": "Malloc2", 00:17:07.544 "nguid": "D8194C05D335404EB7F25E1D26EC3A8B", 00:17:07.544 "nsid": 1, 00:17:07.544 "uuid": "d8194c05-d335-404e-b7f2-5e1d26ec3a8b" 00:17:07.544 } 00:17:07.544 ], 00:17:07.544 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:07.544 "serial_number": "SPDK2", 00:17:07.544 "subtype": "NVMe" 00:17:07.544 } 00:17:07.544 ] 00:17:07.544 14:31:14 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:07.544 14:31:14 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71556 00:17:07.544 14:31:14 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:07.544 14:31:14 -- common/autotest_common.sh@1254 -- # local i=0 00:17:07.544 14:31:14 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.544 14:31:14 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:07.544 14:31:14 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:17:07.544 14:31:14 -- common/autotest_common.sh@1257 -- # i=1 00:17:07.544 14:31:14 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:17:07.544 14:31:14 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.544 14:31:14 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:17:07.544 14:31:14 -- common/autotest_common.sh@1257 -- # i=2 00:17:07.544 14:31:14 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:17:07.544 14:31:14 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.544 14:31:14 -- common/autotest_common.sh@1256 -- # '[' 2 -lt 200 ']' 00:17:07.544 14:31:14 -- common/autotest_common.sh@1257 -- # i=3 00:17:07.544 14:31:14 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:17:07.802 14:31:14 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.802 14:31:14 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.802 14:31:14 -- common/autotest_common.sh@1265 -- # return 0 00:17:07.802 14:31:14 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:07.802 14:31:14 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:08.061 Malloc3 00:17:08.061 14:31:14 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:08.318 14:31:15 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:08.318 Asynchronous Event Request test 00:17:08.318 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:08.318 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:08.318 Registering asynchronous event callbacks... 00:17:08.318 Starting namespace attribute notice tests for all controllers... 00:17:08.318 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:08.318 aer_cb - Changed Namespace 00:17:08.318 Cleaning up... 
00:17:08.577 [ 00:17:08.577 { 00:17:08.577 "allow_any_host": true, 00:17:08.577 "hosts": [], 00:17:08.577 "listen_addresses": [], 00:17:08.577 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:08.577 "subtype": "Discovery" 00:17:08.577 }, 00:17:08.577 { 00:17:08.577 "allow_any_host": true, 00:17:08.577 "hosts": [], 00:17:08.577 "listen_addresses": [ 00:17:08.577 { 00:17:08.577 "adrfam": "IPv4", 00:17:08.577 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:08.577 "transport": "VFIOUSER", 00:17:08.577 "trsvcid": "0", 00:17:08.577 "trtype": "VFIOUSER" 00:17:08.577 } 00:17:08.577 ], 00:17:08.577 "max_cntlid": 65519, 00:17:08.577 "max_namespaces": 32, 00:17:08.577 "min_cntlid": 1, 00:17:08.577 "model_number": "SPDK bdev Controller", 00:17:08.577 "namespaces": [ 00:17:08.577 { 00:17:08.577 "bdev_name": "Malloc1", 00:17:08.577 "name": "Malloc1", 00:17:08.577 "nguid": "0ED3C5A91DAF4571A3A4B9D52896F966", 00:17:08.577 "nsid": 1, 00:17:08.577 "uuid": "0ed3c5a9-1daf-4571-a3a4-b9d52896f966" 00:17:08.577 }, 00:17:08.577 { 00:17:08.577 "bdev_name": "Malloc3", 00:17:08.577 "name": "Malloc3", 00:17:08.577 "nguid": "205246879BB546F892B31071312F0359", 00:17:08.577 "nsid": 2, 00:17:08.577 "uuid": "20524687-9bb5-46f8-92b3-1071312f0359" 00:17:08.577 } 00:17:08.577 ], 00:17:08.577 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:08.577 "serial_number": "SPDK1", 00:17:08.577 "subtype": "NVMe" 00:17:08.577 }, 00:17:08.577 { 00:17:08.577 "allow_any_host": true, 00:17:08.577 "hosts": [], 00:17:08.577 "listen_addresses": [ 00:17:08.577 { 00:17:08.577 "adrfam": "IPv4", 00:17:08.577 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:08.577 "transport": "VFIOUSER", 00:17:08.577 "trsvcid": "0", 00:17:08.577 "trtype": "VFIOUSER" 00:17:08.577 } 00:17:08.577 ], 00:17:08.577 "max_cntlid": 65519, 00:17:08.577 "max_namespaces": 32, 00:17:08.577 "min_cntlid": 1, 00:17:08.577 "model_number": "SPDK bdev Controller", 00:17:08.577 "namespaces": [ 00:17:08.577 { 00:17:08.577 "bdev_name": "Malloc2", 00:17:08.577 "name": "Malloc2", 00:17:08.577 "nguid": "D8194C05D335404EB7F25E1D26EC3A8B", 00:17:08.577 "nsid": 1, 00:17:08.577 "uuid": "d8194c05-d335-404e-b7f2-5e1d26ec3a8b" 00:17:08.577 } 00:17:08.577 ], 00:17:08.577 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:08.577 "serial_number": "SPDK2", 00:17:08.577 "subtype": "NVMe" 00:17:08.577 } 00:17:08.577 ] 00:17:08.577 14:31:15 -- target/nvmf_vfio_user.sh@44 -- # wait 71556 00:17:08.577 14:31:15 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:08.577 14:31:15 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:08.577 14:31:15 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:08.577 14:31:15 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:08.577 [2024-12-06 14:31:15.448707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:08.577 [2024-12-06 14:31:15.448752] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71600 ] 00:17:08.837 [2024-12-06 14:31:15.585385] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:08.837 [2024-12-06 14:31:15.592866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:08.837 [2024-12-06 14:31:15.592904] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1d76797000 00:17:08.837 [2024-12-06 14:31:15.593871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.594868] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.595880] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.596886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.597893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.598894] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.599898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.600903] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.837 [2024-12-06 14:31:15.601907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:08.837 [2024-12-06 14:31:15.601938] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1d7678c000 00:17:08.837 [2024-12-06 14:31:15.603190] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:08.837 [2024-12-06 14:31:15.620721] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:08.837 [2024-12-06 14:31:15.620767] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:08.837 [2024-12-06 14:31:15.622855] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:08.837 [2024-12-06 14:31:15.622936] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:08.837 [2024-12-06 14:31:15.623023] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:08.837 [2024-12-06 
14:31:15.623052] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:08.837 [2024-12-06 14:31:15.623058] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:08.837 [2024-12-06 14:31:15.624438] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:08.837 [2024-12-06 14:31:15.624482] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:08.837 [2024-12-06 14:31:15.624494] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:08.837 [2024-12-06 14:31:15.624865] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:08.837 [2024-12-06 14:31:15.624891] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:08.837 [2024-12-06 14:31:15.624903] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:08.837 [2024-12-06 14:31:15.625894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:08.837 [2024-12-06 14:31:15.625923] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:08.837 [2024-12-06 14:31:15.626882] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:08.837 [2024-12-06 14:31:15.626924] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:08.837 [2024-12-06 14:31:15.626932] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:08.837 [2024-12-06 14:31:15.626942] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:08.837 [2024-12-06 14:31:15.627048] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:08.837 [2024-12-06 14:31:15.627054] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:08.837 [2024-12-06 14:31:15.627059] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:08.837 [2024-12-06 14:31:15.631427] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:08.837 [2024-12-06 14:31:15.631903] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:08.837 [2024-12-06 14:31:15.632914] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:17:08.837 [2024-12-06 14:31:15.633950] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:08.837 [2024-12-06 14:31:15.634925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:08.837 [2024-12-06 14:31:15.634950] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:08.837 [2024-12-06 14:31:15.634957] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.634980] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:08.837 [2024-12-06 14:31:15.634999] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.635016] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:08.837 [2024-12-06 14:31:15.635022] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.837 [2024-12-06 14:31:15.635037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.837 [2024-12-06 14:31:15.639467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:08.837 [2024-12-06 14:31:15.639495] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:08.837 [2024-12-06 14:31:15.639519] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:08.837 [2024-12-06 14:31:15.639524] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:08.837 [2024-12-06 14:31:15.639529] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:08.837 [2024-12-06 14:31:15.639535] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:08.837 [2024-12-06 14:31:15.639540] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:08.837 [2024-12-06 14:31:15.639546] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.639563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.639577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:08.837 [2024-12-06 14:31:15.648458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:08.837 [2024-12-06 14:31:15.648516] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.837 [2024-12-06 14:31:15.648527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.837 [2024-12-06 14:31:15.648536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.837 [2024-12-06 14:31:15.648545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.837 [2024-12-06 14:31:15.648551] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.648564] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.648575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:08.837 [2024-12-06 14:31:15.655449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:08.837 [2024-12-06 14:31:15.655473] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:08.837 [2024-12-06 14:31:15.655496] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.655507] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.655520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.655532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:08.837 [2024-12-06 14:31:15.658442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:08.837 [2024-12-06 14:31:15.658538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.658553] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:08.837 [2024-12-06 14:31:15.658563] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:08.837 [2024-12-06 14:31:15.658570] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:08.837 [2024-12-06 14:31:15.658577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.662460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 
14:31:15.662492] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:08.838 [2024-12-06 14:31:15.662506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.662517] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.662526] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:08.838 [2024-12-06 14:31:15.662532] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.838 [2024-12-06 14:31:15.662540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.670435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.670488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.670503] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.670513] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:08.838 [2024-12-06 14:31:15.670519] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.838 [2024-12-06 14:31:15.670527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.672432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.672475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.672487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.672500] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.672507] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.672513] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.672518] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:08.838 [2024-12-06 14:31:15.672524] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:08.838 [2024-12-06 14:31:15.672529] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:08.838 [2024-12-06 14:31:15.672550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.676447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.676476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.684447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.684507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.686441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.686488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.692424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.692472] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:08.838 [2024-12-06 14:31:15.692480] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:08.838 [2024-12-06 14:31:15.692484] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:08.838 [2024-12-06 14:31:15.692488] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:08.838 [2024-12-06 14:31:15.692495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:08.838 [2024-12-06 14:31:15.692504] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:08.838 [2024-12-06 14:31:15.692509] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:08.838 [2024-12-06 14:31:15.692515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.692523] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:08.838 [2024-12-06 14:31:15.692528] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.838 [2024-12-06 14:31:15.692535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.692543] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:08.838 [2024-12-06 14:31:15.692548] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:08.838 [2024-12-06 14:31:15.692554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:08.838 [2024-12-06 14:31:15.698444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.698498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.698513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:08.838 [2024-12-06 14:31:15.698522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:08.838 ===================================================== 00:17:08.838 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:08.838 ===================================================== 00:17:08.838 Controller Capabilities/Features 00:17:08.838 ================================ 00:17:08.838 Vendor ID: 4e58 00:17:08.838 Subsystem Vendor ID: 4e58 00:17:08.838 Serial Number: SPDK2 00:17:08.838 Model Number: SPDK bdev Controller 00:17:08.838 Firmware Version: 24.01.1 00:17:08.838 Recommended Arb Burst: 6 00:17:08.838 IEEE OUI Identifier: 8d 6b 50 00:17:08.838 Multi-path I/O 00:17:08.838 May have multiple subsystem ports: Yes 00:17:08.838 May have multiple controllers: Yes 00:17:08.838 Associated with SR-IOV VF: No 00:17:08.838 Max Data Transfer Size: 131072 00:17:08.838 Max Number of Namespaces: 32 00:17:08.838 Max Number of I/O Queues: 127 00:17:08.838 NVMe Specification Version (VS): 1.3 00:17:08.838 NVMe Specification Version (Identify): 1.3 00:17:08.838 Maximum Queue Entries: 256 00:17:08.838 Contiguous Queues Required: Yes 00:17:08.838 Arbitration Mechanisms Supported 00:17:08.838 Weighted Round Robin: Not Supported 00:17:08.838 Vendor Specific: Not Supported 00:17:08.838 Reset Timeout: 15000 ms 00:17:08.838 Doorbell Stride: 4 bytes 00:17:08.838 NVM Subsystem Reset: Not Supported 00:17:08.838 Command Sets Supported 00:17:08.838 NVM Command Set: Supported 00:17:08.838 Boot Partition: Not Supported 00:17:08.838 Memory Page Size Minimum: 4096 bytes 00:17:08.838 Memory Page Size Maximum: 4096 bytes 00:17:08.838 Persistent Memory Region: Not Supported 00:17:08.838 Optional Asynchronous Events Supported 00:17:08.838 Namespace Attribute Notices: Supported 00:17:08.838 Firmware Activation Notices: Not Supported 00:17:08.838 ANA Change Notices: Not Supported 00:17:08.838 PLE Aggregate Log Change Notices: Not Supported 00:17:08.838 LBA Status Info Alert Notices: Not Supported 00:17:08.838 EGE Aggregate Log Change Notices: Not Supported 00:17:08.838 Normal NVM Subsystem Shutdown event: Not Supported 00:17:08.838 Zone Descriptor Change Notices: Not Supported 00:17:08.838 Discovery Log Change Notices: Not Supported 00:17:08.838 Controller Attributes 00:17:08.838 128-bit Host Identifier: Supported 00:17:08.838 Non-Operational Permissive Mode: Not Supported 00:17:08.838 NVM Sets: Not Supported 00:17:08.838 Read Recovery Levels: Not Supported 00:17:08.838 Endurance Groups: Not Supported 00:17:08.838 Predictable Latency Mode: Not Supported 00:17:08.838 Traffic Based Keep ALive: Not Supported 00:17:08.838 Namespace Granularity: Not Supported 00:17:08.838 SQ Associations: Not Supported 00:17:08.838 UUID List: Not Supported 00:17:08.838 Multi-Domain Subsystem: Not Supported 00:17:08.838 Fixed Capacity Management: Not Supported 00:17:08.838 
Variable Capacity Management: Not Supported 00:17:08.838 Delete Endurance Group: Not Supported 00:17:08.838 Delete NVM Set: Not Supported 00:17:08.838 Extended LBA Formats Supported: Not Supported 00:17:08.838 Flexible Data Placement Supported: Not Supported 00:17:08.838 00:17:08.838 Controller Memory Buffer Support 00:17:08.838 ================================ 00:17:08.838 Supported: No 00:17:08.838 00:17:08.838 Persistent Memory Region Support 00:17:08.838 ================================ 00:17:08.838 Supported: No 00:17:08.838 00:17:08.838 Admin Command Set Attributes 00:17:08.838 ============================ 00:17:08.838 Security Send/Receive: Not Supported 00:17:08.838 Format NVM: Not Supported 00:17:08.838 Firmware Activate/Download: Not Supported 00:17:08.838 Namespace Management: Not Supported 00:17:08.838 Device Self-Test: Not Supported 00:17:08.839 Directives: Not Supported 00:17:08.839 NVMe-MI: Not Supported 00:17:08.839 Virtualization Management: Not Supported 00:17:08.839 Doorbell Buffer Config: Not Supported 00:17:08.839 Get LBA Status Capability: Not Supported 00:17:08.839 Command & Feature Lockdown Capability: Not Supported 00:17:08.839 Abort Command Limit: 4 00:17:08.839 Async Event Request Limit: 4 00:17:08.839 Number of Firmware Slots: N/A 00:17:08.839 Firmware Slot 1 Read-Only: N/A 00:17:08.839 Firmware Activation Without Reset: N/A 00:17:08.839 Multiple Update Detection Support: N/A 00:17:08.839 Firmware Update Granularity: No Information Provided 00:17:08.839 Per-Namespace SMART Log: No 00:17:08.839 Asymmetric Namespace Access Log Page: Not Supported 00:17:08.839 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:08.839 Command Effects Log Page: Supported 00:17:08.839 Get Log Page Extended Data: Supported 00:17:08.839 Telemetry Log Pages: Not Supported 00:17:08.839 Persistent Event Log Pages: Not Supported 00:17:08.839 Supported Log Pages Log Page: May Support 00:17:08.839 Commands Supported & Effects Log Page: Not Supported 00:17:08.839 Feature Identifiers & Effects Log Page:May Support 00:17:08.839 NVMe-MI Commands & Effects Log Page: May Support 00:17:08.839 Data Area 4 for Telemetry Log: Not Supported 00:17:08.839 Error Log Page Entries Supported: 128 00:17:08.839 Keep Alive: Supported 00:17:08.839 Keep Alive Granularity: 10000 ms 00:17:08.839 00:17:08.839 NVM Command Set Attributes 00:17:08.839 ========================== 00:17:08.839 Submission Queue Entry Size 00:17:08.839 Max: 64 00:17:08.839 Min: 64 00:17:08.839 Completion Queue Entry Size 00:17:08.839 Max: 16 00:17:08.839 Min: 16 00:17:08.839 Number of Namespaces: 32 00:17:08.839 Compare Command: Supported 00:17:08.839 Write Uncorrectable Command: Not Supported 00:17:08.839 Dataset Management Command: Supported 00:17:08.839 Write Zeroes Command: Supported 00:17:08.839 Set Features Save Field: Not Supported 00:17:08.839 Reservations: Not Supported 00:17:08.839 Timestamp: Not Supported 00:17:08.839 Copy: Supported 00:17:08.839 Volatile Write Cache: Present 00:17:08.839 Atomic Write Unit (Normal): 1 00:17:08.839 Atomic Write Unit (PFail): 1 00:17:08.839 Atomic Compare & Write Unit: 1 00:17:08.839 Fused Compare & Write: Supported 00:17:08.839 Scatter-Gather List 00:17:08.839 SGL Command Set: Supported (Dword aligned) 00:17:08.839 SGL Keyed: Not Supported 00:17:08.839 SGL Bit Bucket Descriptor: Not Supported 00:17:08.839 SGL Metadata Pointer: Not Supported 00:17:08.839 Oversized SGL: Not Supported 00:17:08.839 SGL Metadata Address: Not Supported 00:17:08.839 SGL Offset: Not Supported 00:17:08.839 Transport SGL Data 
Block: Not Supported 00:17:08.839 Replay Protected Memory Block: Not Supported 00:17:08.839 00:17:08.839 Firmware Slot Information 00:17:08.839 ========================= 00:17:08.839 Active slot: 1 00:17:08.839 Slot 1 Firmware Revision: 24.01.1 00:17:08.839 00:17:08.839 00:17:08.839 Commands Supported and Effects 00:17:08.839 ============================== 00:17:08.839 Admin Commands 00:17:08.839 -------------- 00:17:08.839 Get Log Page (02h): Supported 00:17:08.839 Identify (06h): Supported 00:17:08.839 Abort (08h): Supported 00:17:08.839 Set Features (09h): Supported 00:17:08.839 Get Features (0Ah): Supported 00:17:08.839 Asynchronous Event Request (0Ch): Supported 00:17:08.839 Keep Alive (18h): Supported 00:17:08.839 I/O Commands 00:17:08.839 ------------ 00:17:08.839 Flush (00h): Supported LBA-Change 00:17:08.839 Write (01h): Supported LBA-Change 00:17:08.839 Read (02h): Supported 00:17:08.839 Compare (05h): Supported 00:17:08.839 Write Zeroes (08h): Supported LBA-Change 00:17:08.839 Dataset Management (09h): Supported LBA-Change 00:17:08.839 Copy (19h): Supported LBA-Change 00:17:08.839 Unknown (79h): Supported LBA-Change 00:17:08.839 Unknown (7Ah): Supported 00:17:08.839 00:17:08.839 Error Log 00:17:08.839 ========= 00:17:08.839 00:17:08.839 Arbitration 00:17:08.839 =========== 00:17:08.839 Arbitration Burst: 1 00:17:08.839 00:17:08.839 Power Management 00:17:08.839 ================ 00:17:08.839 Number of Power States: 1 00:17:08.839 Current Power State: Power State #0 00:17:08.839 Power State #0: 00:17:08.839 Max Power: 0.00 W 00:17:08.839 Non-Operational State: Operational 00:17:08.839 Entry Latency: Not Reported 00:17:08.839 Exit Latency: Not Reported 00:17:08.839 Relative Read Throughput: 0 00:17:08.839 Relative Read Latency: 0 00:17:08.839 Relative Write Throughput: 0 00:17:08.839 Relative Write Latency: 0 00:17:08.839 Idle Power: Not Reported 00:17:08.839 Active Power: Not Reported 00:17:08.839 Non-Operational Permissive Mode: Not Supported 00:17:08.839 00:17:08.839 Health Information 00:17:08.839 ================== 00:17:08.839 Critical Warnings: 00:17:08.839 Available Spare Space: OK 00:17:08.839 Temperature: OK 00:17:08.839 Device Reliability: OK 00:17:08.839 Read Only: No 00:17:08.839 Volatile Memory Backup: OK 00:17:08.839 Current Temperature: 0 Kelvin[2024-12-06 14:31:15.698639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:08.839 [2024-12-06 14:31:15.706438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:08.839 [2024-12-06 14:31:15.706508] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:08.839 [2024-12-06 14:31:15.706522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.839 [2024-12-06 14:31:15.706529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.839 [2024-12-06 14:31:15.706537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.839 [2024-12-06 14:31:15.706544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.839 [2024-12-06 14:31:15.706625] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:08.839 [2024-12-06 14:31:15.706643] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:08.839 [2024-12-06 14:31:15.707672] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:08.839 [2024-12-06 14:31:15.707693] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:08.839 [2024-12-06 14:31:15.708631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:08.839 [2024-12-06 14:31:15.708662] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:08.839 [2024-12-06 14:31:15.708720] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:08.839 [2024-12-06 14:31:15.710041] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:08.839 (-273 Celsius) 00:17:08.839 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:08.839 Available Spare: 0% 00:17:08.839 Available Spare Threshold: 0% 00:17:08.839 Life Percentage Used: 0% 00:17:08.839 Data Units Read: 0 00:17:08.839 Data Units Written: 0 00:17:08.839 Host Read Commands: 0 00:17:08.839 Host Write Commands: 0 00:17:08.839 Controller Busy Time: 0 minutes 00:17:08.839 Power Cycles: 0 00:17:08.839 Power On Hours: 0 hours 00:17:08.839 Unsafe Shutdowns: 0 00:17:08.839 Unrecoverable Media Errors: 0 00:17:08.839 Lifetime Error Log Entries: 0 00:17:08.839 Warning Temperature Time: 0 minutes 00:17:08.839 Critical Temperature Time: 0 minutes 00:17:08.839 00:17:08.839 Number of Queues 00:17:08.839 ================ 00:17:08.839 Number of I/O Submission Queues: 127 00:17:08.839 Number of I/O Completion Queues: 127 00:17:08.839 00:17:08.839 Active Namespaces 00:17:08.839 ================= 00:17:08.839 Namespace ID:1 00:17:08.839 Error Recovery Timeout: Unlimited 00:17:08.839 Command Set Identifier: NVM (00h) 00:17:08.839 Deallocate: Supported 00:17:08.839 Deallocated/Unwritten Error: Not Supported 00:17:08.839 Deallocated Read Value: Unknown 00:17:08.839 Deallocate in Write Zeroes: Not Supported 00:17:08.839 Deallocated Guard Field: 0xFFFF 00:17:08.839 Flush: Supported 00:17:08.839 Reservation: Supported 00:17:08.839 Namespace Sharing Capabilities: Multiple Controllers 00:17:08.839 Size (in LBAs): 131072 (0GiB) 00:17:08.839 Capacity (in LBAs): 131072 (0GiB) 00:17:08.839 Utilization (in LBAs): 131072 (0GiB) 00:17:08.839 NGUID: D8194C05D335404EB7F25E1D26EC3A8B 00:17:08.839 UUID: d8194c05-d335-404e-b7f2-5e1d26ec3a8b 00:17:08.839 Thin Provisioning: Not Supported 00:17:08.839 Per-NS Atomic Units: Yes 00:17:08.839 Atomic Boundary Size (Normal): 0 00:17:08.839 Atomic Boundary Size (PFail): 0 00:17:08.839 Atomic Boundary Offset: 0 00:17:08.839 Maximum Single Source Range Length: 65535 00:17:08.840 Maximum Copy Length: 65535 00:17:08.840 Maximum Source Range Count: 1 00:17:08.840 NGUID/EUI64 Never Reused: No 00:17:08.840 Namespace Write Protected: No 00:17:08.840 Number of LBA Formats: 1 00:17:08.840 Current LBA Format: LBA Format #00 00:17:08.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:08.840 00:17:08.840 
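For reference, a controller and namespace dump like the one above can be regenerated against the same vfio-user endpoint with SPDK's identify example. This is a minimal sketch only; the exact invocation used by nvmf_vfio_user.sh at this step is not visible in the log, so the binary path and flag below are assumptions modelled on the perf and hello_world commands that follow:
# Sketch: dump controller capabilities and namespaces over the vfio-user transport
# (binary path and -r transport-ID string assumed from the surrounding test commands)
/home/vagrant/spdk_repo/spdk/build/examples/identify \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'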
14:31:15 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:15.397 Initializing NVMe Controllers 00:17:15.397 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:15.397 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:15.397 Initialization complete. Launching workers. 00:17:15.397 ======================================================== 00:17:15.397 Latency(us) 00:17:15.397 Device Information : IOPS MiB/s Average min max 00:17:15.397 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34223.41 133.69 3739.40 1110.56 10446.26 00:17:15.397 ======================================================== 00:17:15.397 Total : 34223.41 133.69 3739.40 1110.56 10446.26 00:17:15.397 00:17:15.397 14:31:21 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:19.587 Initializing NVMe Controllers 00:17:19.587 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:19.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:19.587 Initialization complete. Launching workers. 00:17:19.587 ======================================================== 00:17:19.587 Latency(us) 00:17:19.587 Device Information : IOPS MiB/s Average min max 00:17:19.587 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35151.08 137.31 3640.77 1096.64 10680.61 00:17:19.587 ======================================================== 00:17:19.587 Total : 35151.08 137.31 3640.77 1096.64 10680.61 00:17:19.587 00:17:19.587 14:31:26 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:26.153 Initializing NVMe Controllers 00:17:26.153 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:26.153 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:26.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:26.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:26.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:26.153 Initialization complete. Launching workers. 
00:17:26.153 Starting thread on core 2 00:17:26.153 Starting thread on core 3 00:17:26.153 Starting thread on core 1 00:17:26.153 14:31:31 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:28.678 Initializing NVMe Controllers 00:17:28.678 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.678 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.678 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:28.678 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:28.678 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:28.678 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:28.678 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:28.678 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:28.678 Initialization complete. Launching workers. 00:17:28.678 Starting thread on core 1 with urgent priority queue 00:17:28.678 Starting thread on core 2 with urgent priority queue 00:17:28.678 Starting thread on core 3 with urgent priority queue 00:17:28.678 Starting thread on core 0 with urgent priority queue 00:17:28.678 SPDK bdev Controller (SPDK2 ) core 0: 7246.00 IO/s 13.80 secs/100000 ios 00:17:28.678 SPDK bdev Controller (SPDK2 ) core 1: 7138.67 IO/s 14.01 secs/100000 ios 00:17:28.678 SPDK bdev Controller (SPDK2 ) core 2: 6848.67 IO/s 14.60 secs/100000 ios 00:17:28.678 SPDK bdev Controller (SPDK2 ) core 3: 6563.67 IO/s 15.24 secs/100000 ios 00:17:28.678 ======================================================== 00:17:28.678 00:17:28.678 14:31:35 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:28.936 Initializing NVMe Controllers 00:17:28.936 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.936 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.936 Namespace ID: 1 size: 0GB 00:17:28.936 Initialization complete. 00:17:28.936 INFO: using host memory buffer for IO 00:17:28.936 Hello world! 00:17:28.936 14:31:35 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:30.312 Initializing NVMe Controllers 00:17:30.312 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:30.312 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:30.312 Initialization complete. Launching workers. 
00:17:30.312 submit (in ns) avg, min, max = 5919.4, 3511.8, 4029954.1 00:17:30.312 complete (in ns) avg, min, max = 28614.8, 2049.1, 7033460.9 00:17:30.312 00:17:30.312 Submit histogram 00:17:30.312 ================ 00:17:30.312 Range in us Cumulative Count 00:17:30.312 3.505 - 3.520: 0.0577% ( 8) 00:17:30.312 3.520 - 3.535: 0.3459% ( 40) 00:17:30.312 3.535 - 3.549: 1.2036% ( 119) 00:17:30.312 3.549 - 3.564: 1.7874% ( 81) 00:17:30.313 3.564 - 3.578: 2.0252% ( 33) 00:17:30.313 3.578 - 3.593: 2.3063% ( 39) 00:17:30.313 3.593 - 3.607: 2.4937% ( 26) 00:17:30.313 3.607 - 3.622: 2.7243% ( 32) 00:17:30.313 3.622 - 3.636: 4.2667% ( 214) 00:17:30.313 3.636 - 3.651: 6.9045% ( 366) 00:17:30.313 3.651 - 3.665: 8.5838% ( 233) 00:17:30.313 3.665 - 3.680: 9.8883% ( 181) 00:17:30.313 3.680 - 3.695: 12.5189% ( 365) 00:17:30.313 3.695 - 3.709: 14.6739% ( 299) 00:17:30.313 3.709 - 3.724: 16.0072% ( 185) 00:17:30.313 3.724 - 3.753: 20.6126% ( 639) 00:17:30.313 3.753 - 3.782: 25.9027% ( 734) 00:17:30.313 3.782 - 3.811: 46.2991% ( 2830) 00:17:30.313 3.811 - 3.840: 63.8703% ( 2438) 00:17:30.313 3.840 - 3.869: 76.1874% ( 1709) 00:17:30.313 3.869 - 3.898: 81.6216% ( 754) 00:17:30.313 3.898 - 3.927: 84.3820% ( 383) 00:17:30.313 3.927 - 3.956: 86.0901% ( 237) 00:17:30.313 3.956 - 3.985: 87.3514% ( 175) 00:17:30.313 3.985 - 4.015: 88.4829% ( 157) 00:17:30.313 4.015 - 4.044: 89.5063% ( 142) 00:17:30.313 4.044 - 4.073: 90.2703% ( 106) 00:17:30.313 4.073 - 4.102: 91.1279% ( 119) 00:17:30.313 4.102 - 4.131: 92.6559% ( 212) 00:17:30.313 4.131 - 4.160: 95.0559% ( 333) 00:17:30.313 4.160 - 4.189: 96.6775% ( 225) 00:17:30.313 4.189 - 4.218: 97.5928% ( 127) 00:17:30.313 4.218 - 4.247: 98.0108% ( 58) 00:17:30.313 4.247 - 4.276: 98.1766% ( 23) 00:17:30.313 4.276 - 4.305: 98.3423% ( 23) 00:17:30.313 4.305 - 4.335: 98.3928% ( 7) 00:17:30.313 4.335 - 4.364: 98.4505% ( 8) 00:17:30.313 4.364 - 4.393: 98.4721% ( 3) 00:17:30.313 4.393 - 4.422: 98.4937% ( 3) 00:17:30.313 4.422 - 4.451: 98.5009% ( 1) 00:17:30.313 4.451 - 4.480: 98.5081% ( 1) 00:17:30.313 4.538 - 4.567: 98.5369% ( 4) 00:17:30.313 4.567 - 4.596: 98.5514% ( 2) 00:17:30.313 4.596 - 4.625: 98.5586% ( 1) 00:17:30.313 4.655 - 4.684: 98.5658% ( 1) 00:17:30.313 4.684 - 4.713: 98.5802% ( 2) 00:17:30.313 4.713 - 4.742: 98.6162% ( 5) 00:17:30.313 4.742 - 4.771: 98.6378% ( 3) 00:17:30.313 4.771 - 4.800: 98.6667% ( 4) 00:17:30.313 4.800 - 4.829: 98.7171% ( 7) 00:17:30.313 4.829 - 4.858: 98.8252% ( 15) 00:17:30.313 4.858 - 4.887: 98.8685% ( 6) 00:17:30.313 4.887 - 4.916: 98.9045% ( 5) 00:17:30.313 4.916 - 4.945: 98.9910% ( 12) 00:17:30.313 4.945 - 4.975: 99.0559% ( 9) 00:17:30.313 4.975 - 5.004: 99.1351% ( 11) 00:17:30.313 5.004 - 5.033: 99.1784% ( 6) 00:17:30.313 5.033 - 5.062: 99.2216% ( 6) 00:17:30.313 5.062 - 5.091: 99.2649% ( 6) 00:17:30.313 5.091 - 5.120: 99.2721% ( 1) 00:17:30.313 5.120 - 5.149: 99.2937% ( 3) 00:17:30.313 5.149 - 5.178: 99.3009% ( 1) 00:17:30.313 5.178 - 5.207: 99.3081% ( 1) 00:17:30.313 5.207 - 5.236: 99.3153% ( 1) 00:17:30.313 5.236 - 5.265: 99.3297% ( 2) 00:17:30.313 5.295 - 5.324: 99.3441% ( 2) 00:17:30.313 5.353 - 5.382: 99.3658% ( 3) 00:17:30.313 5.411 - 5.440: 99.3730% ( 1) 00:17:30.313 5.673 - 5.702: 99.3802% ( 1) 00:17:30.313 5.847 - 5.876: 99.3874% ( 1) 00:17:30.313 5.876 - 5.905: 99.3946% ( 1) 00:17:30.313 9.193 - 9.251: 99.4162% ( 3) 00:17:30.313 9.309 - 9.367: 99.4234% ( 1) 00:17:30.313 9.425 - 9.484: 99.4306% ( 1) 00:17:30.313 9.484 - 9.542: 99.4450% ( 2) 00:17:30.313 9.600 - 9.658: 99.4523% ( 1) 00:17:30.313 9.716 - 9.775: 99.4667% ( 2) 
00:17:30.313 9.775 - 9.833: 99.4739% ( 1) 00:17:30.313 9.833 - 9.891: 99.4955% ( 3) 00:17:30.313 9.891 - 9.949: 99.5099% ( 2) 00:17:30.313 10.007 - 10.065: 99.5459% ( 5) 00:17:30.313 10.065 - 10.124: 99.5604% ( 2) 00:17:30.313 10.124 - 10.182: 99.5676% ( 1) 00:17:30.313 10.182 - 10.240: 99.5820% ( 2) 00:17:30.313 10.240 - 10.298: 99.5892% ( 1) 00:17:30.313 10.356 - 10.415: 99.5964% ( 1) 00:17:30.313 10.589 - 10.647: 99.6036% ( 1) 00:17:30.313 10.822 - 10.880: 99.6108% ( 1) 00:17:30.313 10.880 - 10.938: 99.6180% ( 1) 00:17:30.313 10.938 - 10.996: 99.6324% ( 2) 00:17:30.313 10.996 - 11.055: 99.6541% ( 3) 00:17:30.313 11.171 - 11.229: 99.6685% ( 2) 00:17:30.313 11.287 - 11.345: 99.6901% ( 3) 00:17:30.313 11.811 - 11.869: 99.6973% ( 1) 00:17:30.313 12.044 - 12.102: 99.7045% ( 1) 00:17:30.313 12.102 - 12.160: 99.7117% ( 1) 00:17:30.313 12.160 - 12.218: 99.7189% ( 1) 00:17:30.313 12.335 - 12.393: 99.7261% ( 1) 00:17:30.313 12.625 - 12.684: 99.7333% ( 1) 00:17:30.313 12.858 - 12.916: 99.7405% ( 1) 00:17:30.313 13.498 - 13.556: 99.7477% ( 1) 00:17:30.313 16.407 - 16.524: 99.7550% ( 1) 00:17:30.313 16.873 - 16.989: 99.7622% ( 1) 00:17:30.313 18.036 - 18.153: 99.7694% ( 1) 00:17:30.313 18.153 - 18.269: 99.7838% ( 2) 00:17:30.313 18.269 - 18.385: 99.7982% ( 2) 00:17:30.313 18.385 - 18.502: 99.8270% ( 4) 00:17:30.313 18.618 - 18.735: 99.8414% ( 2) 00:17:30.313 18.735 - 18.851: 99.8631% ( 3) 00:17:30.313 18.851 - 18.967: 99.8847% ( 3) 00:17:30.313 19.084 - 19.200: 99.8991% ( 2) 00:17:30.313 19.200 - 19.316: 99.9063% ( 1) 00:17:30.313 19.433 - 19.549: 99.9135% ( 1) 00:17:30.313 19.665 - 19.782: 99.9279% ( 2) 00:17:30.313 20.131 - 20.247: 99.9495% ( 3) 00:17:30.313 3991.738 - 4021.527: 99.9928% ( 6) 00:17:30.313 4021.527 - 4051.316: 100.0000% ( 1) 00:17:30.313 00:17:30.313 Complete histogram 00:17:30.313 ================== 00:17:30.313 Range in us Cumulative Count 00:17:30.313 2.036 - 2.051: 0.0072% ( 1) 00:17:30.313 2.051 - 2.065: 0.7784% ( 107) 00:17:30.313 2.065 - 2.080: 2.5874% ( 251) 00:17:30.313 2.080 - 2.095: 3.0054% ( 58) 00:17:30.313 2.095 - 2.109: 3.0703% ( 9) 00:17:30.313 2.109 - 2.124: 3.0991% ( 4) 00:17:30.313 2.124 - 2.138: 10.0541% ( 965) 00:17:30.313 2.138 - 2.153: 20.3604% ( 1430) 00:17:30.313 2.153 - 2.167: 21.8306% ( 204) 00:17:30.313 2.167 - 2.182: 22.0036% ( 24) 00:17:30.313 2.182 - 2.196: 22.1622% ( 22) 00:17:30.313 2.196 - 2.211: 38.2198% ( 2228) 00:17:30.313 2.211 - 2.225: 87.1207% ( 6785) 00:17:30.313 2.225 - 2.240: 95.2288% ( 1125) 00:17:30.313 2.240 - 2.255: 96.0937% ( 120) 00:17:30.313 2.255 - 2.269: 96.9802% ( 123) 00:17:30.313 2.269 - 2.284: 97.5423% ( 78) 00:17:30.313 2.284 - 2.298: 97.9748% ( 60) 00:17:30.313 2.298 - 2.313: 98.3063% ( 46) 00:17:30.313 2.313 - 2.327: 98.5441% ( 33) 00:17:30.313 2.327 - 2.342: 98.6378% ( 13) 00:17:30.313 2.342 - 2.356: 98.7171% ( 11) 00:17:30.313 2.356 - 2.371: 98.7387% ( 3) 00:17:30.313 2.371 - 2.385: 98.7532% ( 2) 00:17:30.313 2.385 - 2.400: 98.7604% ( 1) 00:17:30.313 2.458 - 2.473: 98.7676% ( 1) 00:17:30.313 2.487 - 2.502: 98.7748% ( 1) 00:17:30.313 2.516 - 2.531: 98.7892% ( 2) 00:17:30.313 2.531 - 2.545: 98.7964% ( 1) 00:17:30.313 2.545 - 2.560: 98.8036% ( 1) 00:17:30.313 2.560 - 2.575: 98.8324% ( 4) 00:17:30.313 3.564 - 3.578: 98.8396% ( 1) 00:17:30.313 3.607 - 3.622: 98.8468% ( 1) 00:17:30.313 3.724 - 3.753: 98.8757% ( 4) 00:17:30.313 3.840 - 3.869: 98.8829% ( 1) 00:17:30.313 3.869 - 3.898: 98.8901% ( 1) 00:17:30.313 3.898 - 3.927: 98.8973% ( 1) 00:17:30.313 3.927 - 3.956: 98.9117% ( 2) 00:17:30.313 3.956 - 3.985: 98.9261% ( 2) 
00:17:30.313 3.985 - 4.015: 98.9333% ( 1) 00:17:30.313 4.015 - 4.044: 98.9405% ( 1) 00:17:30.313 4.073 - 4.102: 98.9550% ( 2) 00:17:30.313 4.131 - 4.160: 98.9694% ( 2) 00:17:30.313 4.160 - 4.189: 98.9766% ( 1) 00:17:30.313 4.218 - 4.247: 98.9838% ( 1) 00:17:30.313 4.247 - 4.276: 98.9910% ( 1) 00:17:30.313 4.276 - 4.305: 98.9982% ( 1) 00:17:30.313 4.364 - 4.393: 99.0054% ( 1) 00:17:30.313 4.538 - 4.567: 99.0126% ( 1) 00:17:30.313 4.625 - 4.655: 99.0198% ( 1) 00:17:30.313 4.655 - 4.684: 99.0270% ( 1) 00:17:30.313 4.858 - 4.887: 99.0342% ( 1) 00:17:30.313 6.051 - 6.080: 99.0414% ( 1) 00:17:30.313 7.738 - 7.796: 99.0486% ( 1) 00:17:30.313 7.796 - 7.855: 99.0559% ( 1) 00:17:30.313 7.855 - 7.913: 99.0631% ( 1) 00:17:30.313 8.204 - 8.262: 99.0775% ( 2) 00:17:30.313 8.262 - 8.320: 99.0847% ( 1) 00:17:30.313 8.320 - 8.378: 99.0919% ( 1) 00:17:30.313 8.378 - 8.436: 99.0991% ( 1) 00:17:30.313 8.436 - 8.495: 99.1063% ( 1) 00:17:30.313 8.495 - 8.553: 99.1135% ( 1) 00:17:30.313 8.553 - 8.611: 99.1207% ( 1) 00:17:30.313 8.669 - 8.727: 99.1279% ( 1) 00:17:30.313 8.785 - 8.844: 99.1351% ( 1) 00:17:30.314 8.844 - 8.902: 99.1568% ( 3) 00:17:30.314 8.960 - 9.018: 99.1712% ( 2) 00:17:30.314 9.018 - 9.076: 99.1784% ( 1) 00:17:30.314 9.193 - 9.251: 99.1856% ( 1) 00:17:30.314 9.309 - 9.367: 99.1928% ( 1) 00:17:30.314 9.425 - 9.484: 99.2000% ( 1) 00:17:30.314 9.484 - 9.542: 99.2072% ( 1) 00:17:30.314 9.658 - 9.716: 99.2144% ( 1) 00:17:30.314 10.996 - 11.055: 99.2216% ( 1) 00:17:30.314 11.927 - 11.985: 99.2288% ( 1) 00:17:30.314 15.127 - 15.244: 99.2360% ( 1) 00:17:30.314 15.244 - 15.360: 99.2432% ( 1) 00:17:30.314 16.175 - 16.291: 99.2505% ( 1) 00:17:30.314 16.640 - 16.756: 99.2649% ( 2) 00:17:30.314 16.756 - 16.873: 99.2721% ( 1) 00:17:30.314 17.222 - 17.338: 99.2865% ( 2) 00:17:30.314 17.338 - 17.455: 99.3009% ( 2) 00:17:30.314 17.571 - 17.687: 99.3081% ( 1) 00:17:30.314 17.687 - 17.804: 99.3153% ( 1) 00:17:30.314 17.804 - 17.920: 99.3225% ( 1) 00:17:30.314 17.920 - 18.036: 99.3297% ( 1) 00:17:30.314 18.036 - 18.153: 99.3369% ( 1) 00:17:30.314 18.502 - 18.618: 99.3441% ( 1) 00:17:30.314 3038.487 - 3053.382: 99.3586% ( 2) 00:17:30.314 3991.738 - 4021.527: 99.9640% ( 84) 00:17:30.314 4021.527 - 4051.316: 99.9928% ( 4) 00:17:30.314 7030.225 - 7060.015: 100.0000% ( 1) 00:17:30.314 00:17:30.314 14:31:37 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:30.314 14:31:37 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:30.314 14:31:37 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:30.314 14:31:37 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:30.314 14:31:37 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:30.573 [ 00:17:30.573 { 00:17:30.573 "allow_any_host": true, 00:17:30.573 "hosts": [], 00:17:30.573 "listen_addresses": [], 00:17:30.573 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:30.573 "subtype": "Discovery" 00:17:30.573 }, 00:17:30.573 { 00:17:30.573 "allow_any_host": true, 00:17:30.573 "hosts": [], 00:17:30.573 "listen_addresses": [ 00:17:30.573 { 00:17:30.573 "adrfam": "IPv4", 00:17:30.573 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:30.573 "transport": "VFIOUSER", 00:17:30.573 "trsvcid": "0", 00:17:30.573 "trtype": "VFIOUSER" 00:17:30.573 } 00:17:30.573 ], 00:17:30.573 "max_cntlid": 65519, 00:17:30.573 "max_namespaces": 32, 00:17:30.573 "min_cntlid": 1, 
00:17:30.573 "model_number": "SPDK bdev Controller", 00:17:30.573 "namespaces": [ 00:17:30.573 { 00:17:30.573 "bdev_name": "Malloc1", 00:17:30.573 "name": "Malloc1", 00:17:30.573 "nguid": "0ED3C5A91DAF4571A3A4B9D52896F966", 00:17:30.573 "nsid": 1, 00:17:30.573 "uuid": "0ed3c5a9-1daf-4571-a3a4-b9d52896f966" 00:17:30.573 }, 00:17:30.573 { 00:17:30.573 "bdev_name": "Malloc3", 00:17:30.573 "name": "Malloc3", 00:17:30.573 "nguid": "205246879BB546F892B31071312F0359", 00:17:30.573 "nsid": 2, 00:17:30.573 "uuid": "20524687-9bb5-46f8-92b3-1071312f0359" 00:17:30.573 } 00:17:30.573 ], 00:17:30.573 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:30.573 "serial_number": "SPDK1", 00:17:30.573 "subtype": "NVMe" 00:17:30.573 }, 00:17:30.573 { 00:17:30.573 "allow_any_host": true, 00:17:30.573 "hosts": [], 00:17:30.573 "listen_addresses": [ 00:17:30.573 { 00:17:30.573 "adrfam": "IPv4", 00:17:30.573 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:30.573 "transport": "VFIOUSER", 00:17:30.573 "trsvcid": "0", 00:17:30.573 "trtype": "VFIOUSER" 00:17:30.573 } 00:17:30.573 ], 00:17:30.573 "max_cntlid": 65519, 00:17:30.573 "max_namespaces": 32, 00:17:30.573 "min_cntlid": 1, 00:17:30.573 "model_number": "SPDK bdev Controller", 00:17:30.573 "namespaces": [ 00:17:30.573 { 00:17:30.573 "bdev_name": "Malloc2", 00:17:30.573 "name": "Malloc2", 00:17:30.573 "nguid": "D8194C05D335404EB7F25E1D26EC3A8B", 00:17:30.573 "nsid": 1, 00:17:30.573 "uuid": "d8194c05-d335-404e-b7f2-5e1d26ec3a8b" 00:17:30.573 } 00:17:30.573 ], 00:17:30.573 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:30.573 "serial_number": "SPDK2", 00:17:30.573 "subtype": "NVMe" 00:17:30.573 } 00:17:30.573 ] 00:17:30.573 14:31:37 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:30.573 14:31:37 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71852 00:17:30.573 14:31:37 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:30.573 14:31:37 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:30.573 14:31:37 -- common/autotest_common.sh@1254 -- # local i=0 00:17:30.573 14:31:37 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:30.573 14:31:37 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:17:30.573 14:31:37 -- common/autotest_common.sh@1257 -- # i=1 00:17:30.573 14:31:37 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:17:30.573 14:31:37 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:30.573 14:31:37 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:17:30.573 14:31:37 -- common/autotest_common.sh@1257 -- # i=2 00:17:30.573 14:31:37 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:17:30.832 14:31:37 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:30.832 14:31:37 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:30.832 14:31:37 -- common/autotest_common.sh@1265 -- # return 0 00:17:30.832 14:31:37 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:30.832 14:31:37 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:31.090 Malloc4 00:17:31.090 14:31:37 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:31.368 14:31:38 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:31.368 Asynchronous Event Request test 00:17:31.368 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.368 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.368 Registering asynchronous event callbacks... 00:17:31.368 Starting namespace attribute notice tests for all controllers... 00:17:31.368 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:31.368 aer_cb - Changed Namespace 00:17:31.368 Cleaning up... 00:17:31.627 [ 00:17:31.627 { 00:17:31.627 "allow_any_host": true, 00:17:31.627 "hosts": [], 00:17:31.627 "listen_addresses": [], 00:17:31.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:31.627 "subtype": "Discovery" 00:17:31.627 }, 00:17:31.627 { 00:17:31.627 "allow_any_host": true, 00:17:31.627 "hosts": [], 00:17:31.627 "listen_addresses": [ 00:17:31.627 { 00:17:31.627 "adrfam": "IPv4", 00:17:31.627 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:31.627 "transport": "VFIOUSER", 00:17:31.627 "trsvcid": "0", 00:17:31.627 "trtype": "VFIOUSER" 00:17:31.627 } 00:17:31.627 ], 00:17:31.627 "max_cntlid": 65519, 00:17:31.627 "max_namespaces": 32, 00:17:31.627 "min_cntlid": 1, 00:17:31.627 "model_number": "SPDK bdev Controller", 00:17:31.627 "namespaces": [ 00:17:31.627 { 00:17:31.627 "bdev_name": "Malloc1", 00:17:31.627 "name": "Malloc1", 00:17:31.627 "nguid": "0ED3C5A91DAF4571A3A4B9D52896F966", 00:17:31.627 "nsid": 1, 00:17:31.627 "uuid": "0ed3c5a9-1daf-4571-a3a4-b9d52896f966" 00:17:31.627 }, 00:17:31.627 { 00:17:31.627 "bdev_name": "Malloc3", 00:17:31.627 "name": "Malloc3", 00:17:31.627 "nguid": "205246879BB546F892B31071312F0359", 00:17:31.627 "nsid": 2, 00:17:31.627 "uuid": "20524687-9bb5-46f8-92b3-1071312f0359" 00:17:31.627 } 00:17:31.627 ], 00:17:31.627 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:31.627 "serial_number": "SPDK1", 00:17:31.627 "subtype": "NVMe" 00:17:31.627 }, 00:17:31.627 { 00:17:31.627 "allow_any_host": true, 00:17:31.627 "hosts": [], 00:17:31.627 "listen_addresses": [ 00:17:31.627 { 00:17:31.627 "adrfam": "IPv4", 00:17:31.627 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:31.627 "transport": "VFIOUSER", 00:17:31.627 "trsvcid": "0", 00:17:31.627 "trtype": "VFIOUSER" 00:17:31.627 } 00:17:31.627 ], 00:17:31.627 "max_cntlid": 65519, 00:17:31.627 "max_namespaces": 32, 00:17:31.627 "min_cntlid": 1, 00:17:31.627 "model_number": "SPDK bdev Controller", 00:17:31.627 "namespaces": [ 00:17:31.627 { 00:17:31.627 "bdev_name": "Malloc2", 00:17:31.627 "name": "Malloc2", 00:17:31.627 "nguid": "D8194C05D335404EB7F25E1D26EC3A8B", 00:17:31.627 "nsid": 1, 00:17:31.627 "uuid": "d8194c05-d335-404e-b7f2-5e1d26ec3a8b" 00:17:31.627 }, 00:17:31.627 { 00:17:31.627 "bdev_name": "Malloc4", 00:17:31.627 "name": "Malloc4", 00:17:31.627 "nguid": "E6A43FA39660416AAC3A86F763A5A067", 00:17:31.627 "nsid": 2, 00:17:31.627 "uuid": "e6a43fa3-9660-416a-ac3a-86f763a5a067" 
00:17:31.627 } 00:17:31.627 ], 00:17:31.627 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:31.627 "serial_number": "SPDK2", 00:17:31.627 "subtype": "NVMe" 00:17:31.627 } 00:17:31.627 ] 00:17:31.627 14:31:38 -- target/nvmf_vfio_user.sh@44 -- # wait 71852 00:17:31.627 14:31:38 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:31.627 14:31:38 -- target/nvmf_vfio_user.sh@95 -- # killprocess 71168 00:17:31.627 14:31:38 -- common/autotest_common.sh@936 -- # '[' -z 71168 ']' 00:17:31.627 14:31:38 -- common/autotest_common.sh@940 -- # kill -0 71168 00:17:31.627 14:31:38 -- common/autotest_common.sh@941 -- # uname 00:17:31.627 14:31:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.627 14:31:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71168 00:17:31.627 14:31:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.627 14:31:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.627 14:31:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71168' 00:17:31.627 killing process with pid 71168 00:17:31.627 14:31:38 -- common/autotest_common.sh@955 -- # kill 71168 00:17:31.627 [2024-12-06 14:31:38.467605] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:31.627 14:31:38 -- common/autotest_common.sh@960 -- # wait 71168 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71899 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:31.885 Process pid: 71899 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71899' 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:31.885 14:31:38 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71899 00:17:31.885 14:31:38 -- common/autotest_common.sh@829 -- # '[' -z 71899 ']' 00:17:31.885 14:31:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.885 14:31:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.886 14:31:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.886 14:31:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.886 14:31:38 -- common/autotest_common.sh@10 -- # set +x 00:17:32.144 [2024-12-06 14:31:38.877077] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:32.144 [2024-12-06 14:31:38.878228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:32.144 [2024-12-06 14:31:38.878310] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.144 [2024-12-06 14:31:39.013549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.403 [2024-12-06 14:31:39.126017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:32.403 [2024-12-06 14:31:39.126178] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.404 [2024-12-06 14:31:39.126191] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.404 [2024-12-06 14:31:39.126200] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.404 [2024-12-06 14:31:39.126367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.404 [2024-12-06 14:31:39.127120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.404 [2024-12-06 14:31:39.127275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.404 [2024-12-06 14:31:39.127283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.404 [2024-12-06 14:31:39.217845] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:17:32.404 [2024-12-06 14:31:39.225577] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:17:32.404 [2024-12-06 14:31:39.225781] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:17:32.404 [2024-12-06 14:31:39.226398] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:32.404 [2024-12-06 14:31:39.226553] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
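The interrupt-mode notices above come from the nvmf_tgt launch recorded earlier in this block (target/nvmf_vfio_user.sh@54/@60). A condensed sketch of that launch, using only the command already shown in the log:
# Sketch: start the NVMe-oF target in interrupt mode on cores 0-3 (as logged at @54),
# then wait for the RPC socket before issuing configuration RPCs
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &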
00:17:32.971 14:31:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.971 14:31:39 -- common/autotest_common.sh@862 -- # return 0 00:17:32.971 14:31:39 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:33.907 14:31:40 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:34.166 14:31:41 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:34.166 14:31:41 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:34.166 14:31:41 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:34.166 14:31:41 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:34.166 14:31:41 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:34.424 Malloc1 00:17:34.424 14:31:41 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:34.993 14:31:41 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:34.993 14:31:41 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:35.251 14:31:42 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:35.251 14:31:42 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:35.251 14:31:42 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:35.510 Malloc2 00:17:35.510 14:31:42 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:35.769 14:31:42 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:36.118 14:31:42 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:36.379 14:31:43 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:36.379 14:31:43 -- target/nvmf_vfio_user.sh@95 -- # killprocess 71899 00:17:36.379 14:31:43 -- common/autotest_common.sh@936 -- # '[' -z 71899 ']' 00:17:36.379 14:31:43 -- common/autotest_common.sh@940 -- # kill -0 71899 00:17:36.379 14:31:43 -- common/autotest_common.sh@941 -- # uname 00:17:36.379 14:31:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.379 14:31:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71899 00:17:36.379 killing process with pid 71899 00:17:36.379 14:31:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:36.379 14:31:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:36.379 14:31:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71899' 00:17:36.379 14:31:43 -- common/autotest_common.sh@955 -- # kill 71899 00:17:36.379 14:31:43 -- common/autotest_common.sh@960 -- # wait 71899 00:17:36.638 14:31:43 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:36.638 14:31:43 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:36.638 00:17:36.638 real 0m55.789s 00:17:36.638 user 3m38.931s 00:17:36.638 sys 0m4.476s 
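Taken together, the xtrace lines above amount to the following provisioning sequence for one vfio-user controller. This is a condensed sketch of the steps exactly as logged (device 2 shown; device 1 is identical apart from the names):
# Sketch: create the VFIOUSER transport with the extra flags used by this run (@64)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
# Per-device setup (@68-@74): socket directory, 64 MB malloc bdev with 512-byte blocks,
# subsystem, namespace, and a vfio-user listener rooted at the socket directory
mkdir -p /var/run/vfio-user/domain/vfio-user2/2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 \
  -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0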
00:17:36.638 14:31:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:36.638 14:31:43 -- common/autotest_common.sh@10 -- # set +x 00:17:36.638 ************************************ 00:17:36.638 END TEST nvmf_vfio_user 00:17:36.638 ************************************ 00:17:36.638 14:31:43 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:36.638 14:31:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:36.638 14:31:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.638 14:31:43 -- common/autotest_common.sh@10 -- # set +x 00:17:36.638 ************************************ 00:17:36.638 START TEST nvmf_vfio_user_nvme_compliance 00:17:36.638 ************************************ 00:17:36.638 14:31:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:36.897 * Looking for test storage... 00:17:36.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:17:36.897 14:31:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:36.897 14:31:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:36.897 14:31:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:36.897 14:31:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:36.897 14:31:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:36.897 14:31:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:36.897 14:31:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:36.897 14:31:43 -- scripts/common.sh@335 -- # IFS=.-: 00:17:36.897 14:31:43 -- scripts/common.sh@335 -- # read -ra ver1 00:17:36.897 14:31:43 -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.897 14:31:43 -- scripts/common.sh@336 -- # read -ra ver2 00:17:36.897 14:31:43 -- scripts/common.sh@337 -- # local 'op=<' 00:17:36.897 14:31:43 -- scripts/common.sh@339 -- # ver1_l=2 00:17:36.897 14:31:43 -- scripts/common.sh@340 -- # ver2_l=1 00:17:36.897 14:31:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:36.897 14:31:43 -- scripts/common.sh@343 -- # case "$op" in 00:17:36.897 14:31:43 -- scripts/common.sh@344 -- # : 1 00:17:36.897 14:31:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:36.897 14:31:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.897 14:31:43 -- scripts/common.sh@364 -- # decimal 1 00:17:36.897 14:31:43 -- scripts/common.sh@352 -- # local d=1 00:17:36.897 14:31:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.897 14:31:43 -- scripts/common.sh@354 -- # echo 1 00:17:36.897 14:31:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:36.897 14:31:43 -- scripts/common.sh@365 -- # decimal 2 00:17:36.897 14:31:43 -- scripts/common.sh@352 -- # local d=2 00:17:36.897 14:31:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.897 14:31:43 -- scripts/common.sh@354 -- # echo 2 00:17:36.897 14:31:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:36.897 14:31:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:36.897 14:31:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:36.897 14:31:43 -- scripts/common.sh@367 -- # return 0 00:17:36.897 14:31:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.897 14:31:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.897 --rc genhtml_branch_coverage=1 00:17:36.897 --rc genhtml_function_coverage=1 00:17:36.897 --rc genhtml_legend=1 00:17:36.897 --rc geninfo_all_blocks=1 00:17:36.897 --rc geninfo_unexecuted_blocks=1 00:17:36.897 00:17:36.897 ' 00:17:36.897 14:31:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.897 --rc genhtml_branch_coverage=1 00:17:36.897 --rc genhtml_function_coverage=1 00:17:36.897 --rc genhtml_legend=1 00:17:36.897 --rc geninfo_all_blocks=1 00:17:36.897 --rc geninfo_unexecuted_blocks=1 00:17:36.897 00:17:36.897 ' 00:17:36.897 14:31:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.897 --rc genhtml_branch_coverage=1 00:17:36.897 --rc genhtml_function_coverage=1 00:17:36.897 --rc genhtml_legend=1 00:17:36.897 --rc geninfo_all_blocks=1 00:17:36.897 --rc geninfo_unexecuted_blocks=1 00:17:36.897 00:17:36.897 ' 00:17:36.897 14:31:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.897 --rc genhtml_branch_coverage=1 00:17:36.897 --rc genhtml_function_coverage=1 00:17:36.897 --rc genhtml_legend=1 00:17:36.897 --rc geninfo_all_blocks=1 00:17:36.897 --rc geninfo_unexecuted_blocks=1 00:17:36.897 00:17:36.897 ' 00:17:36.897 14:31:43 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.897 14:31:43 -- nvmf/common.sh@7 -- # uname -s 00:17:36.897 14:31:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.897 14:31:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.897 14:31:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.897 14:31:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.897 14:31:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.897 14:31:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.897 14:31:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.897 14:31:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.897 14:31:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.897 14:31:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.897 14:31:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
00:17:36.897 14:31:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:17:36.897 14:31:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.897 14:31:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.897 14:31:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.898 14:31:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.898 14:31:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.898 14:31:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.898 14:31:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.898 14:31:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.898 14:31:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.898 14:31:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.898 14:31:43 -- paths/export.sh@5 -- # export PATH 00:17:36.898 14:31:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.898 14:31:43 -- nvmf/common.sh@46 -- # : 0 00:17:36.898 14:31:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:36.898 14:31:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:36.898 14:31:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:36.898 14:31:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.898 14:31:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.898 14:31:43 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:36.898 14:31:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:36.898 14:31:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:36.898 14:31:43 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.898 14:31:43 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.898 14:31:43 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:36.898 14:31:43 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:36.898 14:31:43 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:36.898 14:31:43 -- compliance/compliance.sh@20 -- # nvmfpid=72097 00:17:36.898 14:31:43 -- compliance/compliance.sh@21 -- # echo 'Process pid: 72097' 00:17:36.898 Process pid: 72097 00:17:36.898 14:31:43 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:36.898 14:31:43 -- compliance/compliance.sh@24 -- # waitforlisten 72097 00:17:36.898 14:31:43 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:36.898 14:31:43 -- common/autotest_common.sh@829 -- # '[' -z 72097 ']' 00:17:36.898 14:31:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.898 14:31:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.898 14:31:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.898 14:31:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.898 14:31:43 -- common/autotest_common.sh@10 -- # set +x 00:17:36.898 [2024-12-06 14:31:43.821937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:36.898 [2024-12-06 14:31:43.822034] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.156 [2024-12-06 14:31:43.956472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:37.156 [2024-12-06 14:31:44.069781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:37.156 [2024-12-06 14:31:44.069955] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.156 [2024-12-06 14:31:44.069968] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.156 [2024-12-06 14:31:44.069977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.156 [2024-12-06 14:31:44.070148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.156 [2024-12-06 14:31:44.070756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.156 [2024-12-06 14:31:44.070761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.093 14:31:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.093 14:31:44 -- common/autotest_common.sh@862 -- # return 0 00:17:38.093 14:31:44 -- compliance/compliance.sh@26 -- # sleep 1 00:17:39.027 14:31:45 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:39.027 14:31:45 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:39.027 14:31:45 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:39.027 14:31:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.027 14:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.027 14:31:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.027 14:31:45 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:39.027 14:31:45 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:39.027 14:31:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.027 14:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.027 malloc0 00:17:39.027 14:31:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.027 14:31:45 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:39.027 14:31:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.027 14:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.027 14:31:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.027 14:31:45 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:39.027 14:31:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.027 14:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.027 14:31:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.027 14:31:45 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:39.027 14:31:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.027 14:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.027 14:31:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.027 14:31:45 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:39.285 00:17:39.285 00:17:39.285 CUnit - A unit testing framework for C - Version 2.1-3 00:17:39.285 http://cunit.sourceforge.net/ 00:17:39.285 00:17:39.285 00:17:39.285 Suite: nvme_compliance 00:17:39.285 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 14:31:46.140341] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:39.285 [2024-12-06 14:31:46.140421] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:39.285 [2024-12-06 14:31:46.140435] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:39.285 passed 00:17:39.543 Test: admin_identify_ctrlr_verify_fused ...passed 00:17:39.543 Test: admin_identify_ns ...[2024-12-06 14:31:46.390447] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for 
invalid NSID 0 00:17:39.543 [2024-12-06 14:31:46.398427] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:39.543 passed 00:17:39.802 Test: admin_get_features_mandatory_features ...passed 00:17:39.802 Test: admin_get_features_optional_features ...passed 00:17:40.059 Test: admin_set_features_number_of_queues ...passed 00:17:40.059 Test: admin_get_log_page_mandatory_logs ...passed 00:17:40.317 Test: admin_get_log_page_with_lpo ...[2024-12-06 14:31:47.055431] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:40.317 passed 00:17:40.317 Test: fabric_property_get ...passed 00:17:40.317 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 14:31:47.251085] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:40.575 passed 00:17:40.575 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 14:31:47.425434] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:40.575 [2024-12-06 14:31:47.441449] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:40.575 passed 00:17:40.575 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 14:31:47.538538] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:40.884 passed 00:17:40.884 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 14:31:47.702440] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:40.884 [2024-12-06 14:31:47.726451] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:40.884 passed 00:17:40.884 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 14:31:47.819490] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:40.884 [2024-12-06 14:31:47.819557] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:41.157 passed 00:17:41.157 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 14:31:48.002437] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:41.157 [2024-12-06 14:31:48.010478] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:41.157 [2024-12-06 14:31:48.018421] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:41.157 [2024-12-06 14:31:48.026418] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:41.157 passed 00:17:41.415 Test: admin_create_io_sq_verify_pc ...[2024-12-06 14:31:48.160467] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:41.415 passed 00:17:42.789 Test: admin_create_io_qp_max_qps ...[2024-12-06 14:31:49.365429] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:43.048 passed 00:17:43.048 Test: admin_create_io_sq_shared_cq ...[2024-12-06 14:31:49.966422] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:43.306 passed 00:17:43.306 00:17:43.306 Run Summary: Type Total Ran Passed Failed Inactive 00:17:43.306 suites 1 1 n/a 0 0 00:17:43.306 tests 18 18 18 0 0 00:17:43.306 asserts 360 360 360 0 n/a 00:17:43.306 00:17:43.306 Elapsed time = 1.614 seconds 00:17:43.306 14:31:50 -- compliance/compliance.sh@42 -- # killprocess 72097 00:17:43.306 14:31:50 -- 
common/autotest_common.sh@936 -- # '[' -z 72097 ']' 00:17:43.306 14:31:50 -- common/autotest_common.sh@940 -- # kill -0 72097 00:17:43.306 14:31:50 -- common/autotest_common.sh@941 -- # uname 00:17:43.306 14:31:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.306 14:31:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72097 00:17:43.306 14:31:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:43.306 14:31:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:43.306 14:31:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72097' 00:17:43.306 killing process with pid 72097 00:17:43.306 14:31:50 -- common/autotest_common.sh@955 -- # kill 72097 00:17:43.306 14:31:50 -- common/autotest_common.sh@960 -- # wait 72097 00:17:43.564 14:31:50 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:43.564 14:31:50 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:43.564 00:17:43.564 real 0m6.794s 00:17:43.564 user 0m19.008s 00:17:43.564 sys 0m0.542s 00:17:43.564 14:31:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:43.564 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:17:43.564 ************************************ 00:17:43.564 END TEST nvmf_vfio_user_nvme_compliance 00:17:43.564 ************************************ 00:17:43.564 14:31:50 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:43.564 14:31:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:43.564 14:31:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.564 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:17:43.564 ************************************ 00:17:43.564 START TEST nvmf_vfio_user_fuzz 00:17:43.564 ************************************ 00:17:43.564 14:31:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:43.564 * Looking for test storage... 00:17:43.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:43.564 14:31:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:43.564 14:31:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:43.564 14:31:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:43.822 14:31:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:43.822 14:31:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:43.822 14:31:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:43.822 14:31:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:43.822 14:31:50 -- scripts/common.sh@335 -- # IFS=.-: 00:17:43.822 14:31:50 -- scripts/common.sh@335 -- # read -ra ver1 00:17:43.822 14:31:50 -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.822 14:31:50 -- scripts/common.sh@336 -- # read -ra ver2 00:17:43.822 14:31:50 -- scripts/common.sh@337 -- # local 'op=<' 00:17:43.822 14:31:50 -- scripts/common.sh@339 -- # ver1_l=2 00:17:43.822 14:31:50 -- scripts/common.sh@340 -- # ver2_l=1 00:17:43.822 14:31:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:43.822 14:31:50 -- scripts/common.sh@343 -- # case "$op" in 00:17:43.822 14:31:50 -- scripts/common.sh@344 -- # : 1 00:17:43.822 14:31:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:43.822 14:31:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.822 14:31:50 -- scripts/common.sh@364 -- # decimal 1 00:17:43.822 14:31:50 -- scripts/common.sh@352 -- # local d=1 00:17:43.822 14:31:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.822 14:31:50 -- scripts/common.sh@354 -- # echo 1 00:17:43.822 14:31:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:43.822 14:31:50 -- scripts/common.sh@365 -- # decimal 2 00:17:43.822 14:31:50 -- scripts/common.sh@352 -- # local d=2 00:17:43.822 14:31:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.822 14:31:50 -- scripts/common.sh@354 -- # echo 2 00:17:43.822 14:31:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:43.822 14:31:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:43.822 14:31:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:43.822 14:31:50 -- scripts/common.sh@367 -- # return 0 00:17:43.822 14:31:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.822 14:31:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.822 --rc genhtml_branch_coverage=1 00:17:43.822 --rc genhtml_function_coverage=1 00:17:43.822 --rc genhtml_legend=1 00:17:43.822 --rc geninfo_all_blocks=1 00:17:43.822 --rc geninfo_unexecuted_blocks=1 00:17:43.822 00:17:43.822 ' 00:17:43.822 14:31:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.822 --rc genhtml_branch_coverage=1 00:17:43.822 --rc genhtml_function_coverage=1 00:17:43.822 --rc genhtml_legend=1 00:17:43.822 --rc geninfo_all_blocks=1 00:17:43.822 --rc geninfo_unexecuted_blocks=1 00:17:43.822 00:17:43.822 ' 00:17:43.822 14:31:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.822 --rc genhtml_branch_coverage=1 00:17:43.822 --rc genhtml_function_coverage=1 00:17:43.822 --rc genhtml_legend=1 00:17:43.822 --rc geninfo_all_blocks=1 00:17:43.822 --rc geninfo_unexecuted_blocks=1 00:17:43.822 00:17:43.822 ' 00:17:43.822 14:31:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.822 --rc genhtml_branch_coverage=1 00:17:43.822 --rc genhtml_function_coverage=1 00:17:43.822 --rc genhtml_legend=1 00:17:43.822 --rc geninfo_all_blocks=1 00:17:43.822 --rc geninfo_unexecuted_blocks=1 00:17:43.822 00:17:43.822 ' 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.822 14:31:50 -- nvmf/common.sh@7 -- # uname -s 00:17:43.822 14:31:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.822 14:31:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.822 14:31:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.822 14:31:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.822 14:31:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.822 14:31:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.822 14:31:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.822 14:31:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.822 14:31:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.822 14:31:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.822 14:31:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
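Both the compliance run above and the vfio-user fuzz test whose header is being sourced here stand up the same minimal vfio-user target before doing anything else. A condensed sketch of that bring-up, assuming an nvmf_tgt is already running and issuing the RPCs with scripts/rpc.py directly (the rpc_cmd helper seen in the log is effectively a wrapper around it); commands and flags are taken from the xtrace above:

  nqn=nqn.2021-09.io.spdk:cnode0
  traddr=/var/run/vfio-user
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p "$traddr"
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0             # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem "$nqn" -a -s spdk -m 32    # flags as in compliance.sh@36 above
  ./scripts/rpc.py nvmf_subsystem_add_ns "$nqn" malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0
  # compliance.sh then points the CUnit suite at the vfio-user endpoint:
  ./test/nvme/compliance/nvme_compliance -g \
      -r "trtype:VFIOUSER traddr:$traddr subnqn:$nqn"
  # ... while vfio_user_fuzz.sh (next in the log) repeats the same bring-up and instead runs:
  #   ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 \
  #       -F "trtype:VFIOUSER traddr:$traddr subnqn:$nqn" -N -a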
00:17:43.822 14:31:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:17:43.822 14:31:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.822 14:31:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.822 14:31:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.822 14:31:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.822 14:31:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.822 14:31:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.822 14:31:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.822 14:31:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.822 14:31:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.822 14:31:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.822 14:31:50 -- paths/export.sh@5 -- # export PATH 00:17:43.822 14:31:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.822 14:31:50 -- nvmf/common.sh@46 -- # : 0 00:17:43.822 14:31:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:43.822 14:31:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:43.822 14:31:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:43.822 14:31:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.822 14:31:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.822 14:31:50 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:43.822 14:31:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:43.822 14:31:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=72257 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 72257' 00:17:43.822 Process pid: 72257 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:43.822 14:31:50 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 72257 00:17:43.822 14:31:50 -- common/autotest_common.sh@829 -- # '[' -z 72257 ']' 00:17:43.822 14:31:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.822 14:31:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.822 14:31:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.822 14:31:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.822 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:17:44.754 14:31:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.754 14:31:51 -- common/autotest_common.sh@862 -- # return 0 00:17:44.754 14:31:51 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:46.126 14:31:52 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:46.126 14:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.126 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 14:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.126 14:31:52 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:46.126 14:31:52 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:46.126 14:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.126 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 malloc0 00:17:46.126 14:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.126 14:31:52 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:46.126 14:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.126 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 14:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.126 14:31:52 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:46.126 14:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.126 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 14:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.126 14:31:52 -- 
target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:46.126 14:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.126 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 14:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.126 14:31:52 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:46.126 14:31:52 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:46.384 Shutting down the fuzz application 00:17:46.384 14:31:53 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:46.384 14:31:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.384 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:17:46.384 14:31:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.384 14:31:53 -- target/vfio_user_fuzz.sh@46 -- # killprocess 72257 00:17:46.384 14:31:53 -- common/autotest_common.sh@936 -- # '[' -z 72257 ']' 00:17:46.384 14:31:53 -- common/autotest_common.sh@940 -- # kill -0 72257 00:17:46.384 14:31:53 -- common/autotest_common.sh@941 -- # uname 00:17:46.384 14:31:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:46.384 14:31:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72257 00:17:46.384 14:31:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:46.384 14:31:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:46.384 killing process with pid 72257 00:17:46.384 14:31:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72257' 00:17:46.384 14:31:53 -- common/autotest_common.sh@955 -- # kill 72257 00:17:46.384 14:31:53 -- common/autotest_common.sh@960 -- # wait 72257 00:17:46.641 14:31:53 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:46.641 14:31:53 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:46.641 00:17:46.641 real 0m3.082s 00:17:46.641 user 0m3.429s 00:17:46.641 sys 0m0.412s 00:17:46.641 14:31:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:46.641 ************************************ 00:17:46.641 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:17:46.641 END TEST nvmf_vfio_user_fuzz 00:17:46.641 ************************************ 00:17:46.641 14:31:53 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:46.641 14:31:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.641 14:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.641 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:17:46.641 ************************************ 00:17:46.641 START TEST nvmf_host_management 00:17:46.641 ************************************ 00:17:46.641 14:31:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:46.898 * Looking for test storage... 
00:17:46.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.898 14:31:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:46.898 14:31:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:46.898 14:31:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:46.898 14:31:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:46.898 14:31:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:46.898 14:31:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:46.898 14:31:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:46.898 14:31:53 -- scripts/common.sh@335 -- # IFS=.-: 00:17:46.898 14:31:53 -- scripts/common.sh@335 -- # read -ra ver1 00:17:46.898 14:31:53 -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.898 14:31:53 -- scripts/common.sh@336 -- # read -ra ver2 00:17:46.898 14:31:53 -- scripts/common.sh@337 -- # local 'op=<' 00:17:46.898 14:31:53 -- scripts/common.sh@339 -- # ver1_l=2 00:17:46.898 14:31:53 -- scripts/common.sh@340 -- # ver2_l=1 00:17:46.898 14:31:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:46.898 14:31:53 -- scripts/common.sh@343 -- # case "$op" in 00:17:46.898 14:31:53 -- scripts/common.sh@344 -- # : 1 00:17:46.898 14:31:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:46.898 14:31:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:46.898 14:31:53 -- scripts/common.sh@364 -- # decimal 1 00:17:46.898 14:31:53 -- scripts/common.sh@352 -- # local d=1 00:17:46.898 14:31:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.898 14:31:53 -- scripts/common.sh@354 -- # echo 1 00:17:46.898 14:31:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:46.898 14:31:53 -- scripts/common.sh@365 -- # decimal 2 00:17:46.898 14:31:53 -- scripts/common.sh@352 -- # local d=2 00:17:46.898 14:31:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.898 14:31:53 -- scripts/common.sh@354 -- # echo 2 00:17:46.898 14:31:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:46.898 14:31:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:46.898 14:31:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:46.898 14:31:53 -- scripts/common.sh@367 -- # return 0 00:17:46.898 14:31:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.898 14:31:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:46.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.898 --rc genhtml_branch_coverage=1 00:17:46.898 --rc genhtml_function_coverage=1 00:17:46.898 --rc genhtml_legend=1 00:17:46.898 --rc geninfo_all_blocks=1 00:17:46.898 --rc geninfo_unexecuted_blocks=1 00:17:46.898 00:17:46.898 ' 00:17:46.898 14:31:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:46.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.898 --rc genhtml_branch_coverage=1 00:17:46.898 --rc genhtml_function_coverage=1 00:17:46.898 --rc genhtml_legend=1 00:17:46.898 --rc geninfo_all_blocks=1 00:17:46.898 --rc geninfo_unexecuted_blocks=1 00:17:46.898 00:17:46.898 ' 00:17:46.898 14:31:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:46.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.898 --rc genhtml_branch_coverage=1 00:17:46.898 --rc genhtml_function_coverage=1 00:17:46.898 --rc genhtml_legend=1 00:17:46.898 --rc geninfo_all_blocks=1 00:17:46.898 --rc geninfo_unexecuted_blocks=1 00:17:46.898 00:17:46.898 ' 00:17:46.898 
14:31:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:46.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.898 --rc genhtml_branch_coverage=1 00:17:46.898 --rc genhtml_function_coverage=1 00:17:46.899 --rc genhtml_legend=1 00:17:46.899 --rc geninfo_all_blocks=1 00:17:46.899 --rc geninfo_unexecuted_blocks=1 00:17:46.899 00:17:46.899 ' 00:17:46.899 14:31:53 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.899 14:31:53 -- nvmf/common.sh@7 -- # uname -s 00:17:46.899 14:31:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.899 14:31:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.899 14:31:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.899 14:31:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.899 14:31:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.899 14:31:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.899 14:31:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.899 14:31:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.899 14:31:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.899 14:31:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.899 14:31:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:17:46.899 14:31:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:17:46.899 14:31:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.899 14:31:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.899 14:31:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.899 14:31:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.899 14:31:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.899 14:31:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.899 14:31:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.899 14:31:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.899 14:31:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.899 14:31:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.899 14:31:53 -- paths/export.sh@5 -- # export PATH 00:17:46.899 14:31:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.899 14:31:53 -- nvmf/common.sh@46 -- # : 0 00:17:46.899 14:31:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:46.899 14:31:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:46.899 14:31:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:46.899 14:31:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.899 14:31:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.899 14:31:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:46.899 14:31:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:46.899 14:31:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:46.899 14:31:53 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.899 14:31:53 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.899 14:31:53 -- target/host_management.sh@104 -- # nvmftestinit 00:17:46.899 14:31:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:46.899 14:31:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.899 14:31:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:46.899 14:31:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:46.899 14:31:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:46.899 14:31:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.899 14:31:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.899 14:31:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.899 14:31:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:46.899 14:31:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:46.899 14:31:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:46.899 14:31:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:46.899 14:31:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:46.899 14:31:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:46.899 14:31:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.899 14:31:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.899 14:31:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:46.899 14:31:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:46.899 14:31:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.899 14:31:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.899 14:31:53 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.899 14:31:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.899 14:31:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.899 14:31:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.899 14:31:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.899 14:31:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.899 14:31:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:46.899 14:31:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:46.899 Cannot find device "nvmf_tgt_br" 00:17:46.899 14:31:53 -- nvmf/common.sh@154 -- # true 00:17:46.899 14:31:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.899 Cannot find device "nvmf_tgt_br2" 00:17:46.899 14:31:53 -- nvmf/common.sh@155 -- # true 00:17:46.899 14:31:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:46.899 14:31:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:46.899 Cannot find device "nvmf_tgt_br" 00:17:46.899 14:31:53 -- nvmf/common.sh@157 -- # true 00:17:46.899 14:31:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:46.899 Cannot find device "nvmf_tgt_br2" 00:17:46.899 14:31:53 -- nvmf/common.sh@158 -- # true 00:17:46.899 14:31:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:47.156 14:31:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:47.156 14:31:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.156 14:31:53 -- nvmf/common.sh@161 -- # true 00:17:47.156 14:31:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.156 14:31:53 -- nvmf/common.sh@162 -- # true 00:17:47.156 14:31:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.156 14:31:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.156 14:31:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.156 14:31:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.156 14:31:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.156 14:31:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.156 14:31:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.156 14:31:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:47.156 14:31:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:47.156 14:31:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:47.156 14:31:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:47.156 14:31:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:47.156 14:31:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:47.156 14:31:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.156 14:31:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.156 14:31:53 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.156 14:31:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:47.156 14:31:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:47.156 14:31:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.156 14:31:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.156 14:31:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.156 14:31:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.156 14:31:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.156 14:31:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:47.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:17:47.156 00:17:47.156 --- 10.0.0.2 ping statistics --- 00:17:47.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.157 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:47.157 14:31:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:47.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:47.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:17:47.157 00:17:47.157 --- 10.0.0.3 ping statistics --- 00:17:47.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.157 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:47.157 14:31:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:47.157 00:17:47.157 --- 10.0.0.1 ping statistics --- 00:17:47.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.157 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:47.157 14:31:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.157 14:31:54 -- nvmf/common.sh@421 -- # return 0 00:17:47.157 14:31:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:47.157 14:31:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.157 14:31:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:47.157 14:31:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:47.157 14:31:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.157 14:31:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:47.157 14:31:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:47.157 14:31:54 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:47.157 14:31:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:47.157 14:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:47.157 14:31:54 -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 ************************************ 00:17:47.157 START TEST nvmf_host_management 00:17:47.157 ************************************ 00:17:47.157 14:31:54 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:47.157 14:31:54 -- target/host_management.sh@69 -- # starttarget 00:17:47.157 14:31:54 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:47.157 14:31:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:47.157 14:31:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.157 14:31:54 -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 14:31:54 -- nvmf/common.sh@469 -- # nvmfpid=72493 00:17:47.157 
14:31:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:47.157 14:31:54 -- nvmf/common.sh@470 -- # waitforlisten 72493 00:17:47.157 14:31:54 -- common/autotest_common.sh@829 -- # '[' -z 72493 ']' 00:17:47.157 14:31:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.157 14:31:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.157 14:31:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.157 14:31:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.157 14:31:54 -- common/autotest_common.sh@10 -- # set +x 00:17:47.415 [2024-12-06 14:31:54.174693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:47.415 [2024-12-06 14:31:54.174961] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.415 [2024-12-06 14:31:54.317595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.673 [2024-12-06 14:31:54.470090] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:47.673 [2024-12-06 14:31:54.470333] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.673 [2024-12-06 14:31:54.470362] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.673 [2024-12-06 14:31:54.470381] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
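The nvmf_veth_init block above (nvmf/common.sh@140 onward) is what gives the TCP tests a self-contained network: the initiator keeps 10.0.0.1 in the default namespace, the target gets 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, and a bridge ties the host-side veth peers together. A condensed, root-only sketch of the same topology, with the command set taken from the log:

  ip netns add nvmf_tgt_ns_spdk
  # Three veth pairs: one for the initiator, two for the target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addresses: initiator 10.0.0.1, target 10.0.0.2 / 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring everything up and bridge the host-side peers
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Let NVMe/TCP (port 4420) in and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # sanity check, as the harness does before starting the target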
00:17:47.673 [2024-12-06 14:31:54.470630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.673 [2024-12-06 14:31:54.471017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.673 [2024-12-06 14:31:54.471172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:47.673 [2024-12-06 14:31:54.471191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.240 14:31:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.240 14:31:55 -- common/autotest_common.sh@862 -- # return 0 00:17:48.240 14:31:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:48.240 14:31:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.240 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:48.498 14:31:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.498 14:31:55 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:48.498 14:31:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.498 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:48.498 [2024-12-06 14:31:55.231776] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.498 14:31:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.498 14:31:55 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:48.498 14:31:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.498 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:48.498 14:31:55 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:17:48.498 14:31:55 -- target/host_management.sh@23 -- # cat 00:17:48.498 14:31:55 -- target/host_management.sh@30 -- # rpc_cmd 00:17:48.498 14:31:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.498 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:48.498 Malloc0 00:17:48.498 [2024-12-06 14:31:55.317108] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.498 14:31:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.498 14:31:55 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:48.498 14:31:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.498 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:48.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.498 14:31:55 -- target/host_management.sh@73 -- # perfpid=72571 00:17:48.498 14:31:55 -- target/host_management.sh@74 -- # waitforlisten 72571 /var/tmp/bdevperf.sock 00:17:48.498 14:31:55 -- common/autotest_common.sh@829 -- # '[' -z 72571 ']' 00:17:48.498 14:31:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.498 14:31:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.498 14:31:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
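The target for this test was launched with "-m 0x1E" (nvmf/common.sh@468 above), and the four "Reactor started on core N" notices at the top of this block are the direct consequence: the hex cpumask selects cores 1 through 4 and SPDK spins up one reactor per selected core. A tiny illustration of how the mask maps to cores:

  mask=0x1E                        # value passed via -m above
  for core in $(seq 0 31); do
      if (( (mask >> core) & 1 )); then
          echo "reactor expected on core $core"
      fi
  done
  # prints cores 1, 2, 3 and 4, matching the reactor notices in this log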
00:17:48.498 14:31:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.498 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:48.498 14:31:55 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:48.498 14:31:55 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:48.498 14:31:55 -- nvmf/common.sh@520 -- # config=() 00:17:48.498 14:31:55 -- nvmf/common.sh@520 -- # local subsystem config 00:17:48.498 14:31:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.498 14:31:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.498 { 00:17:48.498 "params": { 00:17:48.498 "name": "Nvme$subsystem", 00:17:48.498 "trtype": "$TEST_TRANSPORT", 00:17:48.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.498 "adrfam": "ipv4", 00:17:48.498 "trsvcid": "$NVMF_PORT", 00:17:48.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.498 "hdgst": ${hdgst:-false}, 00:17:48.498 "ddgst": ${ddgst:-false} 00:17:48.498 }, 00:17:48.498 "method": "bdev_nvme_attach_controller" 00:17:48.498 } 00:17:48.498 EOF 00:17:48.498 )") 00:17:48.498 14:31:55 -- nvmf/common.sh@542 -- # cat 00:17:48.498 14:31:55 -- nvmf/common.sh@544 -- # jq . 00:17:48.498 14:31:55 -- nvmf/common.sh@545 -- # IFS=, 00:17:48.498 14:31:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:48.498 "params": { 00:17:48.498 "name": "Nvme0", 00:17:48.498 "trtype": "tcp", 00:17:48.498 "traddr": "10.0.0.2", 00:17:48.498 "adrfam": "ipv4", 00:17:48.498 "trsvcid": "4420", 00:17:48.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:48.498 "hdgst": false, 00:17:48.498 "ddgst": false 00:17:48.498 }, 00:17:48.498 "method": "bdev_nvme_attach_controller" 00:17:48.498 }' 00:17:48.498 [2024-12-06 14:31:55.421355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:48.498 [2024-12-06 14:31:55.421639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72571 ] 00:17:48.756 [2024-12-06 14:31:55.561958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.756 [2024-12-06 14:31:55.673743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.016 Running I/O for 10 seconds... 
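The generated JSON above is the whole client side of this test: bdevperf attaches an NVMe-oF bdev over TCP to 10.0.0.2:4420 as nqn.2016-06.io.spdk:host0 and drives a 10-second verify workload at queue depth 64 with 64 KiB I/O. A standalone sketch of the same launch, writing the config to a file instead of the /dev/fd/63 process substitution used by the harness; note the "subsystems"/"bdev" envelope around the printed controller entry is not echoed by the xtrace above and is assumed here from the usual --json config layout:

  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0",
          "trtype": "tcp",
          "traddr": "10.0.0.2",
          "adrfam": "ipv4",
          "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false,
          "ddgst": false
        }
      }]
    }]
  }
  EOF
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10

While that workload is running, host_management.sh@84 (below) revokes the host's access with
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
which is what triggers the long run of qpair state changes and "ABORTED - SQ DELETION" completions that follow.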
00:17:49.583 14:31:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.583 14:31:56 -- common/autotest_common.sh@862 -- # return 0 00:17:49.583 14:31:56 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:49.583 14:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.583 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:17:49.583 14:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.583 14:31:56 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.583 14:31:56 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:49.583 14:31:56 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:49.583 14:31:56 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:49.583 14:31:56 -- target/host_management.sh@52 -- # local ret=1 00:17:49.583 14:31:56 -- target/host_management.sh@53 -- # local i 00:17:49.583 14:31:56 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:49.583 14:31:56 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:49.583 14:31:56 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:49.583 14:31:56 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:49.583 14:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.583 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:17:49.583 14:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.583 14:31:56 -- target/host_management.sh@55 -- # read_io_count=1991 00:17:49.583 14:31:56 -- target/host_management.sh@58 -- # '[' 1991 -ge 100 ']' 00:17:49.583 14:31:56 -- target/host_management.sh@59 -- # ret=0 00:17:49.583 14:31:56 -- target/host_management.sh@60 -- # break 00:17:49.583 14:31:56 -- target/host_management.sh@64 -- # return 0 00:17:49.583 14:31:56 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:49.583 14:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.583 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:17:49.583 [2024-12-06 14:31:56.510580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.510907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.511046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.511173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.511298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.511314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.511324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.511332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to 
be set 00:17:49.583 [2024-12-06 14:31:56.511341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.583 [2024-12-06 14:31:56.511349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511490] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b2910 is same with the state(5) to be set 00:17:49.584 [2024-12-06 14:31:56.511843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.584 [2024-12-06 14:31:56.511875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:49.584 - 00:17:49.586 [2024-12-06 14:31:56.511897 - 14:31:56.513147] nvme_qpair.c: the paired 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* entries repeat in this form for every remaining outstanding READ/WRITE command on sqid:1 (cids 0-63, lba 12800-23680, len:128), each one completed as ABORTED - SQ DELETION (00/08) sqhd:0000 p:0 m:0 dnr:0 while the submission queue is deleted ahead of the controller reset; the final few entries follow.
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.586 [2024-12-06 14:31:56.513158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.586 [2024-12-06 14:31:56.513167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.586 [2024-12-06 14:31:56.513178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.586 [2024-12-06 14:31:56.513187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.586 [2024-12-06 14:31:56.513202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.586 [2024-12-06 14:31:56.513212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.586 [2024-12-06 14:31:56.513300] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x832400 was disconnected and freed. reset controller. 00:17:49.586 [2024-12-06 14:31:56.514500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:49.587 task offset: 17792 on job bdev=Nvme0n1 fails 00:17:49.587 00:17:49.587 Latency(us) 00:17:49.587 [2024-12-06T14:31:56.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.587 [2024-12-06T14:31:56.557Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:49.587 [2024-12-06T14:31:56.557Z] Job: Nvme0n1 ended in about 0.66 seconds with error 00:17:49.587 Verification LBA range: start 0x0 length 0x400 00:17:49.587 Nvme0n1 : 0.66 3282.70 205.17 96.73 0.00 18607.26 1876.71 24188.74 00:17:49.587 [2024-12-06T14:31:56.557Z] =================================================================================================================== 00:17:49.587 [2024-12-06T14:31:56.557Z] Total : 3282.70 205.17 96.73 0.00 18607.26 1876.71 24188.74 00:17:49.587 [2024-12-06 14:31:56.516768] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:49.587 [2024-12-06 14:31:56.516896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85edc0 (9): Bad file descriptor 00:17:49.587 14:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.587 14:31:56 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:49.587 14:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.587 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:17:49.587 [2024-12-06 14:31:56.529496] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
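The rpc_cmd nvmf_subsystem_add_host call traced above is a thin wrapper around rpc.py; run directly against the target it would look roughly like the sketch below (the default /var/tmp/spdk.sock socket path is an assumption, the NQNs are the ones this test uses):

    # Re-allow host0 on cnode0 so the initiator can reconnect after the reset.
    # -s selects the RPC socket; adjust it if the target was started with another one.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0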
00:17:49.587 14:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.587 14:31:56 -- target/host_management.sh@87 -- # sleep 1 00:17:50.621 14:31:57 -- target/host_management.sh@91 -- # kill -9 72571 00:17:50.621 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72571) - No such process 00:17:50.621 14:31:57 -- target/host_management.sh@91 -- # true 00:17:50.621 14:31:57 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:50.621 14:31:57 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:50.621 14:31:57 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:50.621 14:31:57 -- nvmf/common.sh@520 -- # config=() 00:17:50.621 14:31:57 -- nvmf/common.sh@520 -- # local subsystem config 00:17:50.621 14:31:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:50.621 14:31:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:50.621 { 00:17:50.621 "params": { 00:17:50.621 "name": "Nvme$subsystem", 00:17:50.621 "trtype": "$TEST_TRANSPORT", 00:17:50.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.621 "adrfam": "ipv4", 00:17:50.621 "trsvcid": "$NVMF_PORT", 00:17:50.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.621 "hdgst": ${hdgst:-false}, 00:17:50.621 "ddgst": ${ddgst:-false} 00:17:50.621 }, 00:17:50.621 "method": "bdev_nvme_attach_controller" 00:17:50.621 } 00:17:50.621 EOF 00:17:50.621 )") 00:17:50.621 14:31:57 -- nvmf/common.sh@542 -- # cat 00:17:50.621 14:31:57 -- nvmf/common.sh@544 -- # jq . 00:17:50.621 14:31:57 -- nvmf/common.sh@545 -- # IFS=, 00:17:50.621 14:31:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:50.621 "params": { 00:17:50.621 "name": "Nvme0", 00:17:50.621 "trtype": "tcp", 00:17:50.621 "traddr": "10.0.0.2", 00:17:50.621 "adrfam": "ipv4", 00:17:50.621 "trsvcid": "4420", 00:17:50.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:50.621 "hdgst": false, 00:17:50.621 "ddgst": false 00:17:50.621 }, 00:17:50.621 "method": "bdev_nvme_attach_controller" 00:17:50.621 }' 00:17:50.621 [2024-12-06 14:31:57.585963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:50.621 [2024-12-06 14:31:57.586048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72621 ] 00:17:50.879 [2024-12-06 14:31:57.716503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.879 [2024-12-06 14:31:57.805717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.137 Running I/O for 1 seconds... 
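For readability, the gen_nvmf_target_json fragment that bdevperf receives on /dev/fd/62 above assembles into one JSON config; a sketch of the expanded file is below (the params block is copied from the trace, while the outer subsystems/bdev wrapper is assumed from the usual SPDK JSON config layout, and the ./nvme0.json filename is illustrative):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # Equivalent standalone run, flags taken from the bdevperf command line above:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json ./nvme0.json -q 64 -o 65536 -w verify -t 1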
00:17:52.071 00:17:52.071 Latency(us) 00:17:52.071 [2024-12-06T14:31:59.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.071 [2024-12-06T14:31:59.041Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:52.071 Verification LBA range: start 0x0 length 0x400 00:17:52.071 Nvme0n1 : 1.01 3489.87 218.12 0.00 0.00 18036.90 819.20 23831.27 00:17:52.071 [2024-12-06T14:31:59.041Z] =================================================================================================================== 00:17:52.071 [2024-12-06T14:31:59.041Z] Total : 3489.87 218.12 0.00 0.00 18036.90 819.20 23831.27 00:17:52.330 14:31:59 -- target/host_management.sh@101 -- # stoptarget 00:17:52.330 14:31:59 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:52.330 14:31:59 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:17:52.330 14:31:59 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:17:52.330 14:31:59 -- target/host_management.sh@40 -- # nvmftestfini 00:17:52.330 14:31:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:52.330 14:31:59 -- nvmf/common.sh@116 -- # sync 00:17:52.330 14:31:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:52.330 14:31:59 -- nvmf/common.sh@119 -- # set +e 00:17:52.330 14:31:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:52.330 14:31:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:52.330 rmmod nvme_tcp 00:17:52.330 rmmod nvme_fabrics 00:17:52.588 rmmod nvme_keyring 00:17:52.588 14:31:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:52.588 14:31:59 -- nvmf/common.sh@123 -- # set -e 00:17:52.588 14:31:59 -- nvmf/common.sh@124 -- # return 0 00:17:52.588 14:31:59 -- nvmf/common.sh@477 -- # '[' -n 72493 ']' 00:17:52.588 14:31:59 -- nvmf/common.sh@478 -- # killprocess 72493 00:17:52.588 14:31:59 -- common/autotest_common.sh@936 -- # '[' -z 72493 ']' 00:17:52.588 14:31:59 -- common/autotest_common.sh@940 -- # kill -0 72493 00:17:52.588 14:31:59 -- common/autotest_common.sh@941 -- # uname 00:17:52.588 14:31:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:52.588 14:31:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72493 00:17:52.588 killing process with pid 72493 00:17:52.588 14:31:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:52.588 14:31:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:52.588 14:31:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72493' 00:17:52.588 14:31:59 -- common/autotest_common.sh@955 -- # kill 72493 00:17:52.588 14:31:59 -- common/autotest_common.sh@960 -- # wait 72493 00:17:52.847 [2024-12-06 14:31:59.604514] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:52.847 14:31:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:52.847 14:31:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:52.847 14:31:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:52.847 14:31:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.847 14:31:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:52.847 14:31:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.847 14:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.847 14:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.847 14:31:59 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:52.847 00:17:52.847 real 0m5.564s 00:17:52.847 user 0m23.139s 00:17:52.847 sys 0m1.288s 00:17:52.847 14:31:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.847 ************************************ 00:17:52.847 END TEST nvmf_host_management 00:17:52.847 ************************************ 00:17:52.847 14:31:59 -- common/autotest_common.sh@10 -- # set +x 00:17:52.847 14:31:59 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:52.847 00:17:52.847 real 0m6.164s 00:17:52.847 user 0m23.337s 00:17:52.847 sys 0m1.550s 00:17:52.847 ************************************ 00:17:52.847 END TEST nvmf_host_management 00:17:52.847 ************************************ 00:17:52.847 14:31:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.847 14:31:59 -- common/autotest_common.sh@10 -- # set +x 00:17:52.847 14:31:59 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:52.847 14:31:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:52.847 14:31:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.847 14:31:59 -- common/autotest_common.sh@10 -- # set +x 00:17:52.847 ************************************ 00:17:52.847 START TEST nvmf_lvol 00:17:52.847 ************************************ 00:17:52.847 14:31:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:53.106 * Looking for test storage... 00:17:53.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:53.106 14:31:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:53.106 14:31:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:53.106 14:31:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:53.106 14:31:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:53.106 14:31:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:53.106 14:31:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.106 14:31:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.106 14:31:59 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.106 14:31:59 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.106 14:31:59 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.106 14:31:59 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.106 14:31:59 -- scripts/common.sh@337 -- # local 'op=<' 00:17:53.106 14:31:59 -- scripts/common.sh@339 -- # ver1_l=2 00:17:53.106 14:31:59 -- scripts/common.sh@340 -- # ver2_l=1 00:17:53.106 14:31:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:53.106 14:31:59 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.106 14:31:59 -- scripts/common.sh@344 -- # : 1 00:17:53.106 14:31:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.106 14:31:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.106 14:31:59 -- scripts/common.sh@364 -- # decimal 1 00:17:53.106 14:31:59 -- scripts/common.sh@352 -- # local d=1 00:17:53.106 14:31:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.106 14:31:59 -- scripts/common.sh@354 -- # echo 1 00:17:53.106 14:31:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:53.106 14:31:59 -- scripts/common.sh@365 -- # decimal 2 00:17:53.106 14:31:59 -- scripts/common.sh@352 -- # local d=2 00:17:53.106 14:31:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.106 14:31:59 -- scripts/common.sh@354 -- # echo 2 00:17:53.106 14:31:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:53.106 14:31:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.106 14:31:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.106 14:31:59 -- scripts/common.sh@367 -- # return 0 00:17:53.106 14:31:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.106 14:31:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:53.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.106 --rc genhtml_branch_coverage=1 00:17:53.107 --rc genhtml_function_coverage=1 00:17:53.107 --rc genhtml_legend=1 00:17:53.107 --rc geninfo_all_blocks=1 00:17:53.107 --rc geninfo_unexecuted_blocks=1 00:17:53.107 00:17:53.107 ' 00:17:53.107 14:31:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:53.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.107 --rc genhtml_branch_coverage=1 00:17:53.107 --rc genhtml_function_coverage=1 00:17:53.107 --rc genhtml_legend=1 00:17:53.107 --rc geninfo_all_blocks=1 00:17:53.107 --rc geninfo_unexecuted_blocks=1 00:17:53.107 00:17:53.107 ' 00:17:53.107 14:31:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:53.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.107 --rc genhtml_branch_coverage=1 00:17:53.107 --rc genhtml_function_coverage=1 00:17:53.107 --rc genhtml_legend=1 00:17:53.107 --rc geninfo_all_blocks=1 00:17:53.107 --rc geninfo_unexecuted_blocks=1 00:17:53.107 00:17:53.107 ' 00:17:53.107 14:31:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:53.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.107 --rc genhtml_branch_coverage=1 00:17:53.107 --rc genhtml_function_coverage=1 00:17:53.107 --rc genhtml_legend=1 00:17:53.107 --rc geninfo_all_blocks=1 00:17:53.107 --rc geninfo_unexecuted_blocks=1 00:17:53.107 00:17:53.107 ' 00:17:53.107 14:31:59 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:53.107 14:31:59 -- nvmf/common.sh@7 -- # uname -s 00:17:53.107 14:31:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.107 14:31:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.107 14:31:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.107 14:31:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.107 14:31:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.107 14:31:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.107 14:31:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.107 14:31:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.107 14:31:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.107 14:31:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.107 14:31:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:17:53.107 
14:31:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:17:53.107 14:31:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.107 14:31:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.107 14:31:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:53.107 14:31:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:53.107 14:31:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.107 14:31:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.107 14:31:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.107 14:31:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.107 14:31:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.107 14:31:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.107 14:31:59 -- paths/export.sh@5 -- # export PATH 00:17:53.107 14:31:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.107 14:31:59 -- nvmf/common.sh@46 -- # : 0 00:17:53.107 14:31:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.107 14:31:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.107 14:31:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.107 14:31:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.107 14:31:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.107 14:31:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
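nvmf/common.sh only defines the NVME_CONNECT / NVME_HOST pieces at this point; a rough sketch of how host-side tests typically expand them is below (the target NQN nqn.2016-06.io.spdk:cnode1 is illustrative, the host NQN/ID are the values generated above):

    # "$NVME_CONNECT" "${NVME_HOST[@]}" -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT -n <subsystem nqn>
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d \
        --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d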
00:17:53.107 14:31:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.107 14:31:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.107 14:31:59 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:53.107 14:31:59 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:53.107 14:31:59 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:53.107 14:31:59 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:53.107 14:31:59 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:53.107 14:31:59 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:53.107 14:31:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:53.107 14:31:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.107 14:31:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:53.107 14:31:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.107 14:31:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.107 14:31:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.107 14:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.107 14:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.107 14:31:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:53.107 14:31:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:53.107 14:31:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:53.107 14:31:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:53.107 14:31:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:53.107 14:31:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:53.107 14:31:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.107 14:31:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.107 14:31:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:53.107 14:31:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:53.107 14:31:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:53.107 14:31:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:53.107 14:31:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:53.107 14:31:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.107 14:31:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:53.107 14:31:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:53.107 14:31:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:53.107 14:31:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:53.107 14:31:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:53.107 14:31:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:53.107 Cannot find device "nvmf_tgt_br" 00:17:53.107 14:32:00 -- nvmf/common.sh@154 -- # true 00:17:53.107 14:32:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.107 Cannot find device "nvmf_tgt_br2" 00:17:53.107 14:32:00 -- nvmf/common.sh@155 -- # true 00:17:53.107 14:32:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:53.107 14:32:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:53.108 Cannot find device "nvmf_tgt_br" 00:17:53.108 14:32:00 -- nvmf/common.sh@157 -- # true 00:17:53.108 14:32:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:53.108 Cannot find device "nvmf_tgt_br2" 00:17:53.108 14:32:00 -- nvmf/common.sh@158 -- # true 00:17:53.108 14:32:00 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:17:53.366 14:32:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:53.366 14:32:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.366 14:32:00 -- nvmf/common.sh@161 -- # true 00:17:53.366 14:32:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.366 14:32:00 -- nvmf/common.sh@162 -- # true 00:17:53.366 14:32:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.366 14:32:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.366 14:32:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.366 14:32:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.366 14:32:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.366 14:32:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.366 14:32:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.366 14:32:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:53.366 14:32:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:53.366 14:32:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:53.366 14:32:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:53.366 14:32:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:53.366 14:32:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:53.366 14:32:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.366 14:32:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.366 14:32:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.366 14:32:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:53.366 14:32:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:53.366 14:32:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.366 14:32:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.366 14:32:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.366 14:32:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.366 14:32:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.366 14:32:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:53.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:53.366 00:17:53.366 --- 10.0.0.2 ping statistics --- 00:17:53.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.367 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:53.367 14:32:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:53.367 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:53.367 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:17:53.367 00:17:53.367 --- 10.0.0.3 ping statistics --- 00:17:53.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.367 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:53.367 14:32:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:53.367 00:17:53.367 --- 10.0.0.1 ping statistics --- 00:17:53.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.367 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:53.367 14:32:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.367 14:32:00 -- nvmf/common.sh@421 -- # return 0 00:17:53.367 14:32:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:53.367 14:32:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.367 14:32:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:53.367 14:32:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:53.367 14:32:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.367 14:32:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:53.367 14:32:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:53.367 14:32:00 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:53.367 14:32:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:53.367 14:32:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.367 14:32:00 -- common/autotest_common.sh@10 -- # set +x 00:17:53.367 14:32:00 -- nvmf/common.sh@469 -- # nvmfpid=72865 00:17:53.367 14:32:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:53.367 14:32:00 -- nvmf/common.sh@470 -- # waitforlisten 72865 00:17:53.367 14:32:00 -- common/autotest_common.sh@829 -- # '[' -z 72865 ']' 00:17:53.367 14:32:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.367 14:32:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.367 14:32:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.367 14:32:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.367 14:32:00 -- common/autotest_common.sh@10 -- # set +x 00:17:53.625 [2024-12-06 14:32:00.385208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:53.625 [2024-12-06 14:32:00.385310] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.625 [2024-12-06 14:32:00.524394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:53.883 [2024-12-06 14:32:00.622480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:53.883 [2024-12-06 14:32:00.622667] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.883 [2024-12-06 14:32:00.622681] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:53.883 [2024-12-06 14:32:00.622690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.883 [2024-12-06 14:32:00.623165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.883 [2024-12-06 14:32:00.623360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.883 [2024-12-06 14:32:00.623369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.450 14:32:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.450 14:32:01 -- common/autotest_common.sh@862 -- # return 0 00:17:54.450 14:32:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:54.450 14:32:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:54.450 14:32:01 -- common/autotest_common.sh@10 -- # set +x 00:17:54.450 14:32:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.450 14:32:01 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:54.708 [2024-12-06 14:32:01.674761] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.966 14:32:01 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:55.225 14:32:02 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:55.225 14:32:02 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:55.483 14:32:02 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:55.483 14:32:02 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:55.742 14:32:02 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:56.000 14:32:02 -- target/nvmf_lvol.sh@29 -- # lvs=77924549-6f9a-4e36-9148-bfd02e92b6db 00:17:56.000 14:32:02 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 77924549-6f9a-4e36-9148-bfd02e92b6db lvol 20 00:17:56.260 14:32:03 -- target/nvmf_lvol.sh@32 -- # lvol=2e23d01f-f8cc-4841-ba48-f277eaaad468 00:17:56.260 14:32:03 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:56.519 14:32:03 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e23d01f-f8cc-4841-ba48-f277eaaad468 00:17:56.778 14:32:03 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:57.167 [2024-12-06 14:32:03.954611] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.167 14:32:03 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:57.426 14:32:04 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:57.426 14:32:04 -- target/nvmf_lvol.sh@42 -- # perf_pid=73009 00:17:57.426 14:32:04 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:58.362 14:32:05 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2e23d01f-f8cc-4841-ba48-f277eaaad468 MY_SNAPSHOT 
00:17:58.620 14:32:05 -- target/nvmf_lvol.sh@47 -- # snapshot=e22f8b5f-55b7-4b4e-887b-67b9fe0aab4a 00:17:58.620 14:32:05 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2e23d01f-f8cc-4841-ba48-f277eaaad468 30 00:17:58.877 14:32:05 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e22f8b5f-55b7-4b4e-887b-67b9fe0aab4a MY_CLONE 00:17:59.444 14:32:06 -- target/nvmf_lvol.sh@49 -- # clone=ea2fc720-bcc8-4d65-9c07-475cbeb2eeed 00:17:59.444 14:32:06 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ea2fc720-bcc8-4d65-9c07-475cbeb2eeed 00:18:00.014 14:32:06 -- target/nvmf_lvol.sh@53 -- # wait 73009 00:18:08.125 Initializing NVMe Controllers 00:18:08.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:08.125 Controller IO queue size 128, less than required. 00:18:08.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:08.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:08.125 Initialization complete. Launching workers. 00:18:08.125 ======================================================== 00:18:08.125 Latency(us) 00:18:08.125 Device Information : IOPS MiB/s Average min max 00:18:08.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10913.50 42.63 11735.80 2366.17 57588.16 00:18:08.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10925.80 42.68 11716.99 1636.51 49587.13 00:18:08.125 ======================================================== 00:18:08.125 Total : 21839.30 85.31 11726.39 1636.51 57588.16 00:18:08.125 00:18:08.125 14:32:14 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:08.125 14:32:14 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2e23d01f-f8cc-4841-ba48-f277eaaad468 00:18:08.125 14:32:15 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77924549-6f9a-4e36-9148-bfd02e92b6db 00:18:08.382 14:32:15 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:08.382 14:32:15 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:08.382 14:32:15 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:08.382 14:32:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:08.382 14:32:15 -- nvmf/common.sh@116 -- # sync 00:18:08.639 14:32:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:08.639 14:32:15 -- nvmf/common.sh@119 -- # set +e 00:18:08.639 14:32:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:08.639 14:32:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:08.639 rmmod nvme_tcp 00:18:08.639 rmmod nvme_fabrics 00:18:08.639 rmmod nvme_keyring 00:18:08.639 14:32:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:08.639 14:32:15 -- nvmf/common.sh@123 -- # set -e 00:18:08.639 14:32:15 -- nvmf/common.sh@124 -- # return 0 00:18:08.639 14:32:15 -- nvmf/common.sh@477 -- # '[' -n 72865 ']' 00:18:08.639 14:32:15 -- nvmf/common.sh@478 -- # killprocess 72865 00:18:08.639 14:32:15 -- common/autotest_common.sh@936 -- # '[' -z 72865 ']' 00:18:08.639 14:32:15 -- common/autotest_common.sh@940 -- # kill -0 72865 00:18:08.639 14:32:15 -- common/autotest_common.sh@941 -- # uname 00:18:08.639 
14:32:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.639 14:32:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72865 00:18:08.639 killing process with pid 72865 00:18:08.639 14:32:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.639 14:32:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.639 14:32:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72865' 00:18:08.639 14:32:15 -- common/autotest_common.sh@955 -- # kill 72865 00:18:08.639 14:32:15 -- common/autotest_common.sh@960 -- # wait 72865 00:18:08.897 14:32:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:08.897 14:32:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:08.897 14:32:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:08.897 14:32:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.897 14:32:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:08.897 14:32:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.897 14:32:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.897 14:32:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.897 14:32:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:08.897 00:18:08.897 real 0m16.039s 00:18:08.897 user 1m6.779s 00:18:08.897 sys 0m3.690s 00:18:08.897 14:32:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:08.897 14:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:08.897 ************************************ 00:18:08.897 END TEST nvmf_lvol 00:18:08.897 ************************************ 00:18:08.897 14:32:15 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:08.897 14:32:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:08.897 14:32:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.897 14:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:08.897 ************************************ 00:18:08.897 START TEST nvmf_lvs_grow 00:18:08.897 ************************************ 00:18:08.897 14:32:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:09.157 * Looking for test storage... 
00:18:09.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:09.157 14:32:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:09.157 14:32:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:09.157 14:32:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:09.157 14:32:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:09.157 14:32:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:09.157 14:32:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:09.158 14:32:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:09.158 14:32:16 -- scripts/common.sh@335 -- # IFS=.-: 00:18:09.158 14:32:16 -- scripts/common.sh@335 -- # read -ra ver1 00:18:09.158 14:32:16 -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.158 14:32:16 -- scripts/common.sh@336 -- # read -ra ver2 00:18:09.158 14:32:16 -- scripts/common.sh@337 -- # local 'op=<' 00:18:09.158 14:32:16 -- scripts/common.sh@339 -- # ver1_l=2 00:18:09.158 14:32:16 -- scripts/common.sh@340 -- # ver2_l=1 00:18:09.158 14:32:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:09.158 14:32:16 -- scripts/common.sh@343 -- # case "$op" in 00:18:09.158 14:32:16 -- scripts/common.sh@344 -- # : 1 00:18:09.158 14:32:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:09.158 14:32:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.158 14:32:16 -- scripts/common.sh@364 -- # decimal 1 00:18:09.158 14:32:16 -- scripts/common.sh@352 -- # local d=1 00:18:09.158 14:32:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.158 14:32:16 -- scripts/common.sh@354 -- # echo 1 00:18:09.158 14:32:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:09.158 14:32:16 -- scripts/common.sh@365 -- # decimal 2 00:18:09.158 14:32:16 -- scripts/common.sh@352 -- # local d=2 00:18:09.158 14:32:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.158 14:32:16 -- scripts/common.sh@354 -- # echo 2 00:18:09.158 14:32:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:09.158 14:32:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:09.158 14:32:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:09.158 14:32:16 -- scripts/common.sh@367 -- # return 0 00:18:09.158 14:32:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.158 14:32:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.158 --rc genhtml_branch_coverage=1 00:18:09.158 --rc genhtml_function_coverage=1 00:18:09.158 --rc genhtml_legend=1 00:18:09.158 --rc geninfo_all_blocks=1 00:18:09.158 --rc geninfo_unexecuted_blocks=1 00:18:09.158 00:18:09.158 ' 00:18:09.158 14:32:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.158 --rc genhtml_branch_coverage=1 00:18:09.158 --rc genhtml_function_coverage=1 00:18:09.158 --rc genhtml_legend=1 00:18:09.158 --rc geninfo_all_blocks=1 00:18:09.158 --rc geninfo_unexecuted_blocks=1 00:18:09.158 00:18:09.158 ' 00:18:09.158 14:32:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.158 --rc genhtml_branch_coverage=1 00:18:09.158 --rc genhtml_function_coverage=1 00:18:09.158 --rc genhtml_legend=1 00:18:09.158 --rc geninfo_all_blocks=1 00:18:09.158 --rc geninfo_unexecuted_blocks=1 00:18:09.158 00:18:09.158 ' 00:18:09.158 
14:32:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.158 --rc genhtml_branch_coverage=1 00:18:09.158 --rc genhtml_function_coverage=1 00:18:09.158 --rc genhtml_legend=1 00:18:09.158 --rc geninfo_all_blocks=1 00:18:09.158 --rc geninfo_unexecuted_blocks=1 00:18:09.158 00:18:09.158 ' 00:18:09.158 14:32:16 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.158 14:32:16 -- nvmf/common.sh@7 -- # uname -s 00:18:09.158 14:32:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.158 14:32:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.158 14:32:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.158 14:32:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.158 14:32:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.158 14:32:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.158 14:32:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.158 14:32:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.158 14:32:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.158 14:32:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.158 14:32:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:18:09.158 14:32:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:18:09.158 14:32:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.158 14:32:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.158 14:32:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.158 14:32:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.158 14:32:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.158 14:32:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.158 14:32:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.158 14:32:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.158 14:32:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.158 14:32:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.158 14:32:16 -- paths/export.sh@5 -- # export PATH 00:18:09.158 14:32:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.158 14:32:16 -- nvmf/common.sh@46 -- # : 0 00:18:09.158 14:32:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:09.158 14:32:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:09.158 14:32:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:09.158 14:32:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.158 14:32:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.158 14:32:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:09.158 14:32:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:09.158 14:32:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:09.158 14:32:16 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.158 14:32:16 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.158 14:32:16 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:09.158 14:32:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:09.158 14:32:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.158 14:32:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:09.158 14:32:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:09.158 14:32:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:09.158 14:32:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.158 14:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.158 14:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.158 14:32:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:09.158 14:32:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:09.159 14:32:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:09.159 14:32:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:09.159 14:32:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:09.159 14:32:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:09.159 14:32:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.159 14:32:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.159 14:32:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:09.159 14:32:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:09.159 14:32:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.159 14:32:16 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.159 14:32:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.159 14:32:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.159 14:32:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.159 14:32:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.159 14:32:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.159 14:32:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.159 14:32:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:09.159 14:32:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:09.159 Cannot find device "nvmf_tgt_br" 00:18:09.159 14:32:16 -- nvmf/common.sh@154 -- # true 00:18:09.159 14:32:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.159 Cannot find device "nvmf_tgt_br2" 00:18:09.159 14:32:16 -- nvmf/common.sh@155 -- # true 00:18:09.159 14:32:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:09.420 14:32:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:09.420 Cannot find device "nvmf_tgt_br" 00:18:09.420 14:32:16 -- nvmf/common.sh@157 -- # true 00:18:09.420 14:32:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:09.420 Cannot find device "nvmf_tgt_br2" 00:18:09.420 14:32:16 -- nvmf/common.sh@158 -- # true 00:18:09.420 14:32:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:09.420 14:32:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:09.420 14:32:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.420 14:32:16 -- nvmf/common.sh@161 -- # true 00:18:09.420 14:32:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.420 14:32:16 -- nvmf/common.sh@162 -- # true 00:18:09.420 14:32:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.420 14:32:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.420 14:32:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.420 14:32:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.420 14:32:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.420 14:32:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.420 14:32:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.420 14:32:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:09.420 14:32:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:09.420 14:32:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:09.420 14:32:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:09.420 14:32:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:09.420 14:32:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:09.420 14:32:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.420 14:32:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
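The stretch of xtrace here is nvmf_veth_init from test/nvmf/common.sh building the virtual LAN that carries all NVMe/TCP traffic in this job: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target owns nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, and the peer ends of all three veth pairs are enslaved to the nvmf_br bridge. A condensed sketch of the sequence, reconstructed from the commands in this trace; the second target interface is configured exactly like the first and is omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings further down (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) only confirm that the bridge forwards in both directions before the target is started.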
00:18:09.420 14:32:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.420 14:32:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:09.420 14:32:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:09.420 14:32:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:09.420 14:32:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.420 14:32:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.679 14:32:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.679 14:32:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.679 14:32:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:09.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:18:09.679 00:18:09.679 --- 10.0.0.2 ping statistics --- 00:18:09.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.679 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:18:09.679 14:32:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:09.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:18:09.679 00:18:09.679 --- 10.0.0.3 ping statistics --- 00:18:09.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.679 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:09.679 14:32:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:09.679 00:18:09.679 --- 10.0.0.1 ping statistics --- 00:18:09.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.679 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:09.679 14:32:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.679 14:32:16 -- nvmf/common.sh@421 -- # return 0 00:18:09.679 14:32:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:09.679 14:32:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.679 14:32:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:09.679 14:32:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:09.679 14:32:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.679 14:32:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:09.679 14:32:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:09.679 14:32:16 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:09.679 14:32:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:09.679 14:32:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.679 14:32:16 -- common/autotest_common.sh@10 -- # set +x 00:18:09.679 14:32:16 -- nvmf/common.sh@469 -- # nvmfpid=73386 00:18:09.679 14:32:16 -- nvmf/common.sh@470 -- # waitforlisten 73386 00:18:09.679 14:32:16 -- common/autotest_common.sh@829 -- # '[' -z 73386 ']' 00:18:09.679 14:32:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:09.679 14:32:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.679 14:32:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.679 14:32:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:18:09.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.679 14:32:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.679 14:32:16 -- common/autotest_common.sh@10 -- # set +x 00:18:09.679 [2024-12-06 14:32:16.512671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:09.680 [2024-12-06 14:32:16.512798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.938 [2024-12-06 14:32:16.653279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.938 [2024-12-06 14:32:16.747891] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:09.938 [2024-12-06 14:32:16.748056] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.938 [2024-12-06 14:32:16.748068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.939 [2024-12-06 14:32:16.748077] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.939 [2024-12-06 14:32:16.748106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.506 14:32:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.506 14:32:17 -- common/autotest_common.sh@862 -- # return 0 00:18:10.506 14:32:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:10.506 14:32:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.506 14:32:17 -- common/autotest_common.sh@10 -- # set +x 00:18:10.768 14:32:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.768 14:32:17 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:11.027 [2024-12-06 14:32:17.796388] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:11.027 14:32:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:11.027 14:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:11.027 14:32:17 -- common/autotest_common.sh@10 -- # set +x 00:18:11.027 ************************************ 00:18:11.027 START TEST lvs_grow_clean 00:18:11.027 ************************************ 00:18:11.027 14:32:17 -- common/autotest_common.sh@1114 -- # lvs_grow 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:11.027 14:32:17 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:11.286 14:32:18 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:11.286 14:32:18 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:11.544 14:32:18 -- target/nvmf_lvs_grow.sh@28 -- # lvs=101c8f0c-376f-471d-a840-a86117559717 00:18:11.544 14:32:18 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:11.544 14:32:18 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:11.803 14:32:18 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:11.803 14:32:18 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:11.803 14:32:18 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 101c8f0c-376f-471d-a840-a86117559717 lvol 150 00:18:12.061 14:32:18 -- target/nvmf_lvs_grow.sh@33 -- # lvol=86e1ac6f-9c41-4f62-a190-3aa928baab57 00:18:12.061 14:32:18 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:12.061 14:32:18 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:12.319 [2024-12-06 14:32:19.212488] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:12.319 [2024-12-06 14:32:19.212574] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:12.319 true 00:18:12.319 14:32:19 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:12.319 14:32:19 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:12.578 14:32:19 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:12.578 14:32:19 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:12.836 14:32:19 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86e1ac6f-9c41-4f62-a190-3aa928baab57 00:18:13.404 14:32:20 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:13.404 [2024-12-06 14:32:20.341193] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.404 14:32:20 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:13.663 14:32:20 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:13.663 14:32:20 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73548 00:18:13.664 14:32:20 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.664 14:32:20 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73548 /var/tmp/bdevperf.sock 00:18:13.664 14:32:20 -- common/autotest_common.sh@829 -- # '[' -z 73548 ']' 00:18:13.664 14:32:20 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.664 14:32:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.664 14:32:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.664 14:32:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.664 14:32:20 -- common/autotest_common.sh@10 -- # set +x 00:18:13.922 [2024-12-06 14:32:20.638131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:13.922 [2024-12-06 14:32:20.638222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73548 ] 00:18:13.923 [2024-12-06 14:32:20.776270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.181 [2024-12-06 14:32:20.898131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.748 14:32:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.748 14:32:21 -- common/autotest_common.sh@862 -- # return 0 00:18:14.749 14:32:21 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:15.007 Nvme0n1 00:18:15.008 14:32:21 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:15.266 [ 00:18:15.266 { 00:18:15.266 "aliases": [ 00:18:15.266 "86e1ac6f-9c41-4f62-a190-3aa928baab57" 00:18:15.266 ], 00:18:15.266 "assigned_rate_limits": { 00:18:15.266 "r_mbytes_per_sec": 0, 00:18:15.266 "rw_ios_per_sec": 0, 00:18:15.266 "rw_mbytes_per_sec": 0, 00:18:15.266 "w_mbytes_per_sec": 0 00:18:15.266 }, 00:18:15.266 "block_size": 4096, 00:18:15.266 "claimed": false, 00:18:15.266 "driver_specific": { 00:18:15.266 "mp_policy": "active_passive", 00:18:15.266 "nvme": [ 00:18:15.266 { 00:18:15.266 "ctrlr_data": { 00:18:15.266 "ana_reporting": false, 00:18:15.266 "cntlid": 1, 00:18:15.266 "firmware_revision": "24.01.1", 00:18:15.266 "model_number": "SPDK bdev Controller", 00:18:15.266 "multi_ctrlr": true, 00:18:15.266 "oacs": { 00:18:15.266 "firmware": 0, 00:18:15.266 "format": 0, 00:18:15.266 "ns_manage": 0, 00:18:15.266 "security": 0 00:18:15.266 }, 00:18:15.266 "serial_number": "SPDK0", 00:18:15.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:15.266 "vendor_id": "0x8086" 00:18:15.266 }, 00:18:15.266 "ns_data": { 00:18:15.266 "can_share": true, 00:18:15.266 "id": 1 00:18:15.266 }, 00:18:15.266 "trid": { 00:18:15.266 "adrfam": "IPv4", 00:18:15.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:15.266 "traddr": "10.0.0.2", 00:18:15.266 "trsvcid": "4420", 00:18:15.266 "trtype": "TCP" 00:18:15.266 }, 00:18:15.266 "vs": { 00:18:15.266 "nvme_version": "1.3" 00:18:15.266 } 00:18:15.266 } 00:18:15.266 ] 00:18:15.266 }, 00:18:15.266 "name": "Nvme0n1", 00:18:15.266 "num_blocks": 38912, 00:18:15.266 "product_name": "NVMe disk", 00:18:15.266 "supported_io_types": { 00:18:15.266 "abort": true, 00:18:15.266 "compare": true, 00:18:15.266 "compare_and_write": true, 00:18:15.266 "flush": true, 00:18:15.266 "nvme_admin": true, 00:18:15.266 "nvme_io": true, 00:18:15.266 "read": true, 00:18:15.266 "reset": true, 00:18:15.266 "unmap": 
true, 00:18:15.266 "write": true, 00:18:15.266 "write_zeroes": true 00:18:15.266 }, 00:18:15.266 "uuid": "86e1ac6f-9c41-4f62-a190-3aa928baab57", 00:18:15.266 "zoned": false 00:18:15.266 } 00:18:15.266 ] 00:18:15.266 14:32:22 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73601 00:18:15.266 14:32:22 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:15.266 14:32:22 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:15.524 Running I/O for 10 seconds... 00:18:16.458 Latency(us) 00:18:16.458 [2024-12-06T14:32:23.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.458 [2024-12-06T14:32:23.428Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.458 Nvme0n1 : 1.00 8072.00 31.53 0.00 0.00 0.00 0.00 0.00 00:18:16.458 [2024-12-06T14:32:23.428Z] =================================================================================================================== 00:18:16.458 [2024-12-06T14:32:23.428Z] Total : 8072.00 31.53 0.00 0.00 0.00 0.00 0.00 00:18:16.458 00:18:17.526 14:32:24 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 101c8f0c-376f-471d-a840-a86117559717 00:18:17.526 [2024-12-06T14:32:24.496Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.526 Nvme0n1 : 2.00 8095.50 31.62 0.00 0.00 0.00 0.00 0.00 00:18:17.526 [2024-12-06T14:32:24.496Z] =================================================================================================================== 00:18:17.526 [2024-12-06T14:32:24.496Z] Total : 8095.50 31.62 0.00 0.00 0.00 0.00 0.00 00:18:17.526 00:18:17.786 true 00:18:17.786 14:32:24 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:17.786 14:32:24 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:18.045 14:32:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:18.045 14:32:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:18.045 14:32:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 73601 00:18:18.611 [2024-12-06T14:32:25.581Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.611 Nvme0n1 : 3.00 8103.33 31.65 0.00 0.00 0.00 0.00 0.00 00:18:18.611 [2024-12-06T14:32:25.581Z] =================================================================================================================== 00:18:18.611 [2024-12-06T14:32:25.581Z] Total : 8103.33 31.65 0.00 0.00 0.00 0.00 0.00 00:18:18.611 00:18:19.567 [2024-12-06T14:32:26.537Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.567 Nvme0n1 : 4.00 8014.75 31.31 0.00 0.00 0.00 0.00 0.00 00:18:19.567 [2024-12-06T14:32:26.537Z] =================================================================================================================== 00:18:19.567 [2024-12-06T14:32:26.537Z] Total : 8014.75 31.31 0.00 0.00 0.00 0.00 0.00 00:18:19.567 00:18:20.500 [2024-12-06T14:32:27.470Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.500 Nvme0n1 : 5.00 8005.00 31.27 0.00 0.00 0.00 0.00 0.00 00:18:20.500 [2024-12-06T14:32:27.470Z] =================================================================================================================== 00:18:20.500 [2024-12-06T14:32:27.470Z] Total : 8005.00 31.27 0.00 0.00 0.00 0.00 0.00 00:18:20.500 
00:18:21.433 [2024-12-06T14:32:28.403Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.433 Nvme0n1 : 6.00 8010.67 31.29 0.00 0.00 0.00 0.00 0.00 00:18:21.433 [2024-12-06T14:32:28.403Z] =================================================================================================================== 00:18:21.433 [2024-12-06T14:32:28.403Z] Total : 8010.67 31.29 0.00 0.00 0.00 0.00 0.00 00:18:21.433 00:18:22.413 [2024-12-06T14:32:29.383Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.413 Nvme0n1 : 7.00 7990.86 31.21 0.00 0.00 0.00 0.00 0.00 00:18:22.413 [2024-12-06T14:32:29.383Z] =================================================================================================================== 00:18:22.413 [2024-12-06T14:32:29.383Z] Total : 7990.86 31.21 0.00 0.00 0.00 0.00 0.00 00:18:22.413 00:18:23.786 [2024-12-06T14:32:30.756Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:23.786 Nvme0n1 : 8.00 7959.00 31.09 0.00 0.00 0.00 0.00 0.00 00:18:23.786 [2024-12-06T14:32:30.756Z] =================================================================================================================== 00:18:23.786 [2024-12-06T14:32:30.756Z] Total : 7959.00 31.09 0.00 0.00 0.00 0.00 0.00 00:18:23.786 00:18:24.721 [2024-12-06T14:32:31.691Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.721 Nvme0n1 : 9.00 7929.22 30.97 0.00 0.00 0.00 0.00 0.00 00:18:24.721 [2024-12-06T14:32:31.691Z] =================================================================================================================== 00:18:24.721 [2024-12-06T14:32:31.691Z] Total : 7929.22 30.97 0.00 0.00 0.00 0.00 0.00 00:18:24.721 00:18:25.657 [2024-12-06T14:32:32.627Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.657 Nvme0n1 : 10.00 7918.40 30.93 0.00 0.00 0.00 0.00 0.00 00:18:25.657 [2024-12-06T14:32:32.627Z] =================================================================================================================== 00:18:25.657 [2024-12-06T14:32:32.627Z] Total : 7918.40 30.93 0.00 0.00 0.00 0.00 0.00 00:18:25.657 00:18:25.657 00:18:25.657 Latency(us) 00:18:25.657 [2024-12-06T14:32:32.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.657 [2024-12-06T14:32:32.627Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.657 Nvme0n1 : 10.01 7921.42 30.94 0.00 0.00 16154.05 7536.64 72923.69 00:18:25.657 [2024-12-06T14:32:32.627Z] =================================================================================================================== 00:18:25.657 [2024-12-06T14:32:32.627Z] Total : 7921.42 30.94 0.00 0.00 16154.05 7536.64 72923.69 00:18:25.657 0 00:18:25.657 14:32:32 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73548 00:18:25.657 14:32:32 -- common/autotest_common.sh@936 -- # '[' -z 73548 ']' 00:18:25.657 14:32:32 -- common/autotest_common.sh@940 -- # kill -0 73548 00:18:25.657 14:32:32 -- common/autotest_common.sh@941 -- # uname 00:18:25.657 14:32:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.657 14:32:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73548 00:18:25.657 14:32:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:25.657 14:32:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:25.657 killing process with pid 73548 00:18:25.657 14:32:32 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 73548' 00:18:25.657 14:32:32 -- common/autotest_common.sh@955 -- # kill 73548 00:18:25.657 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.657 00:18:25.657 Latency(us) 00:18:25.657 [2024-12-06T14:32:32.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.657 [2024-12-06T14:32:32.627Z] =================================================================================================================== 00:18:25.657 [2024-12-06T14:32:32.627Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.657 14:32:32 -- common/autotest_common.sh@960 -- # wait 73548 00:18:25.915 14:32:32 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:26.173 14:32:32 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:26.173 14:32:32 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:26.431 14:32:33 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:26.432 14:32:33 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:26.432 14:32:33 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:26.690 [2024-12-06 14:32:33.463213] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:26.690 14:32:33 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:26.690 14:32:33 -- common/autotest_common.sh@650 -- # local es=0 00:18:26.690 14:32:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:26.690 14:32:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.690 14:32:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.690 14:32:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.690 14:32:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.690 14:32:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.690 14:32:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.690 14:32:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.690 14:32:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:26.690 14:32:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:26.948 2024/12/06 14:32:33 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:101c8f0c-376f-471d-a840-a86117559717], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:18:26.948 request: 00:18:26.948 { 00:18:26.948 "method": "bdev_lvol_get_lvstores", 00:18:26.948 "params": { 00:18:26.948 "uuid": "101c8f0c-376f-471d-a840-a86117559717" 00:18:26.948 } 00:18:26.948 } 00:18:26.948 Got JSON-RPC error response 00:18:26.948 GoRPCClient: error on JSON-RPC call 00:18:26.948 14:32:33 -- common/autotest_common.sh@653 -- # es=1 00:18:26.948 14:32:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.948 
14:32:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.948 14:32:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.948 14:32:33 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:27.206 aio_bdev 00:18:27.206 14:32:34 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 86e1ac6f-9c41-4f62-a190-3aa928baab57 00:18:27.206 14:32:34 -- common/autotest_common.sh@897 -- # local bdev_name=86e1ac6f-9c41-4f62-a190-3aa928baab57 00:18:27.206 14:32:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:27.206 14:32:34 -- common/autotest_common.sh@899 -- # local i 00:18:27.206 14:32:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:27.206 14:32:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:27.206 14:32:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:27.464 14:32:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 86e1ac6f-9c41-4f62-a190-3aa928baab57 -t 2000 00:18:27.723 [ 00:18:27.723 { 00:18:27.723 "aliases": [ 00:18:27.723 "lvs/lvol" 00:18:27.723 ], 00:18:27.723 "assigned_rate_limits": { 00:18:27.723 "r_mbytes_per_sec": 0, 00:18:27.723 "rw_ios_per_sec": 0, 00:18:27.723 "rw_mbytes_per_sec": 0, 00:18:27.723 "w_mbytes_per_sec": 0 00:18:27.723 }, 00:18:27.723 "block_size": 4096, 00:18:27.723 "claimed": false, 00:18:27.723 "driver_specific": { 00:18:27.723 "lvol": { 00:18:27.723 "base_bdev": "aio_bdev", 00:18:27.723 "clone": false, 00:18:27.723 "esnap_clone": false, 00:18:27.723 "lvol_store_uuid": "101c8f0c-376f-471d-a840-a86117559717", 00:18:27.723 "snapshot": false, 00:18:27.723 "thin_provision": false 00:18:27.724 } 00:18:27.724 }, 00:18:27.724 "name": "86e1ac6f-9c41-4f62-a190-3aa928baab57", 00:18:27.724 "num_blocks": 38912, 00:18:27.724 "product_name": "Logical Volume", 00:18:27.724 "supported_io_types": { 00:18:27.724 "abort": false, 00:18:27.724 "compare": false, 00:18:27.724 "compare_and_write": false, 00:18:27.724 "flush": false, 00:18:27.724 "nvme_admin": false, 00:18:27.724 "nvme_io": false, 00:18:27.724 "read": true, 00:18:27.724 "reset": true, 00:18:27.724 "unmap": true, 00:18:27.724 "write": true, 00:18:27.724 "write_zeroes": true 00:18:27.724 }, 00:18:27.724 "uuid": "86e1ac6f-9c41-4f62-a190-3aa928baab57", 00:18:27.724 "zoned": false 00:18:27.724 } 00:18:27.724 ] 00:18:27.724 14:32:34 -- common/autotest_common.sh@905 -- # return 0 00:18:27.724 14:32:34 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:27.724 14:32:34 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:27.982 14:32:34 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:27.982 14:32:34 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 101c8f0c-376f-471d-a840-a86117559717 00:18:27.982 14:32:34 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:28.251 14:32:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:28.251 14:32:35 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 86e1ac6f-9c41-4f62-a190-3aa928baab57 00:18:28.511 14:32:35 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 101c8f0c-376f-471d-a840-a86117559717 
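The teardown running here closes out lvs_grow_clean, which exercised one fixed sequence: back an lvstore with a 200 MiB AIO file, carve a 150 MiB lvol out of it, grow the backing file to 400 MiB and rescan the AIO bdev up front, export the lvol over NVMe/TCP, drive 4 KiB random writes at it with bdevperf for ten seconds, and grow the lvstore itself mid-run (total_data_clusters moves from 49 to 99 in the trace above). A condensed sketch of that sequence as this run performed it; rpc.py and bdevperf abbreviate the full scripts/rpc.py and build/examples/bdevperf paths, and the shell variables are shorthand for the values captured in the trace:

    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    rpc.py bdev_aio_create "$aio" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio"                 # double the backing file...
    rpc.py bdev_aio_rescan aio_bdev         # ...the bdev grows, the lvstore still reports 49 clusters
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_grow_lvstore -u "$lvs" # issued while the run is in flight; total_data_clusters: 49 -> 99

The free_clusters == 61 check just above is the arithmetic falling out of that: 99 data clusters of 4 MiB minus the 38 clusters the 150 MiB lvol occupies.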
00:18:28.769 14:32:35 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:29.027 14:32:35 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:29.592 ************************************ 00:18:29.592 END TEST lvs_grow_clean 00:18:29.592 ************************************ 00:18:29.592 00:18:29.592 real 0m18.539s 00:18:29.592 user 0m17.913s 00:18:29.592 sys 0m2.197s 00:18:29.592 14:32:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:29.592 14:32:36 -- common/autotest_common.sh@10 -- # set +x 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:29.592 14:32:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:29.592 14:32:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.592 14:32:36 -- common/autotest_common.sh@10 -- # set +x 00:18:29.592 ************************************ 00:18:29.592 START TEST lvs_grow_dirty 00:18:29.592 ************************************ 00:18:29.592 14:32:36 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:29.592 14:32:36 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:29.851 14:32:36 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:29.851 14:32:36 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:30.110 14:32:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=14ce8e48-e925-41bd-80af-a745c277bfde 00:18:30.110 14:32:36 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:30.110 14:32:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:30.368 14:32:37 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:30.368 14:32:37 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:30.368 14:32:37 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14ce8e48-e925-41bd-80af-a745c277bfde lvol 150 00:18:30.626 14:32:37 -- target/nvmf_lvs_grow.sh@33 -- # lvol=661362f9-0a9a-4521-a7ff-71abe7c0f201 00:18:30.626 14:32:37 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:30.626 14:32:37 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:30.886 [2024-12-06 14:32:37.726330] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:30.886 [2024-12-06 14:32:37.726469] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:30.886 true 00:18:30.886 14:32:37 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:30.886 14:32:37 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:31.146 14:32:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:31.146 14:32:37 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:31.406 14:32:38 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 661362f9-0a9a-4521-a7ff-71abe7c0f201 00:18:31.664 14:32:38 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:31.938 14:32:38 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:32.197 14:32:39 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73991 00:18:32.197 14:32:39 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:32.197 14:32:39 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:32.197 14:32:39 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73991 /var/tmp/bdevperf.sock 00:18:32.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.197 14:32:39 -- common/autotest_common.sh@829 -- # '[' -z 73991 ']' 00:18:32.197 14:32:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.197 14:32:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.197 14:32:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.197 14:32:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.197 14:32:39 -- common/autotest_common.sh@10 -- # set +x 00:18:32.197 [2024-12-06 14:32:39.092945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:32.197 [2024-12-06 14:32:39.093390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73991 ] 00:18:32.455 [2024-12-06 14:32:39.234133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.455 [2024-12-06 14:32:39.384231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.389 14:32:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.389 14:32:40 -- common/autotest_common.sh@862 -- # return 0 00:18:33.389 14:32:40 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:33.389 Nvme0n1 00:18:33.647 14:32:40 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:33.647 [ 00:18:33.647 { 00:18:33.647 "aliases": [ 00:18:33.647 "661362f9-0a9a-4521-a7ff-71abe7c0f201" 00:18:33.647 ], 00:18:33.647 "assigned_rate_limits": { 00:18:33.647 "r_mbytes_per_sec": 0, 00:18:33.647 "rw_ios_per_sec": 0, 00:18:33.647 "rw_mbytes_per_sec": 0, 00:18:33.647 "w_mbytes_per_sec": 0 00:18:33.647 }, 00:18:33.647 "block_size": 4096, 00:18:33.647 "claimed": false, 00:18:33.647 "driver_specific": { 00:18:33.647 "mp_policy": "active_passive", 00:18:33.647 "nvme": [ 00:18:33.647 { 00:18:33.647 "ctrlr_data": { 00:18:33.647 "ana_reporting": false, 00:18:33.647 "cntlid": 1, 00:18:33.647 "firmware_revision": "24.01.1", 00:18:33.647 "model_number": "SPDK bdev Controller", 00:18:33.647 "multi_ctrlr": true, 00:18:33.647 "oacs": { 00:18:33.647 "firmware": 0, 00:18:33.647 "format": 0, 00:18:33.647 "ns_manage": 0, 00:18:33.647 "security": 0 00:18:33.647 }, 00:18:33.647 "serial_number": "SPDK0", 00:18:33.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:33.647 "vendor_id": "0x8086" 00:18:33.647 }, 00:18:33.647 "ns_data": { 00:18:33.647 "can_share": true, 00:18:33.647 "id": 1 00:18:33.647 }, 00:18:33.647 "trid": { 00:18:33.647 "adrfam": "IPv4", 00:18:33.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:33.648 "traddr": "10.0.0.2", 00:18:33.648 "trsvcid": "4420", 00:18:33.648 "trtype": "TCP" 00:18:33.648 }, 00:18:33.648 "vs": { 00:18:33.648 "nvme_version": "1.3" 00:18:33.648 } 00:18:33.648 } 00:18:33.648 ] 00:18:33.648 }, 00:18:33.648 "name": "Nvme0n1", 00:18:33.648 "num_blocks": 38912, 00:18:33.648 "product_name": "NVMe disk", 00:18:33.648 "supported_io_types": { 00:18:33.648 "abort": true, 00:18:33.648 "compare": true, 00:18:33.648 "compare_and_write": true, 00:18:33.648 "flush": true, 00:18:33.648 "nvme_admin": true, 00:18:33.648 "nvme_io": true, 00:18:33.648 "read": true, 00:18:33.648 "reset": true, 00:18:33.648 "unmap": true, 00:18:33.648 "write": true, 00:18:33.648 "write_zeroes": true 00:18:33.648 }, 00:18:33.648 "uuid": "661362f9-0a9a-4521-a7ff-71abe7c0f201", 00:18:33.648 "zoned": false 00:18:33.648 } 00:18:33.648 ] 00:18:33.648 14:32:40 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74039 00:18:33.648 14:32:40 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.648 14:32:40 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:33.906 Running I/O for 10 seconds... 
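The ten-second bdevperf run that follows mirrors the clean case, including the bdev_lvol_grow_lvstore issued a couple of seconds in; lvs_grow_dirty departs from it right after the run: because the lvstore is flagged dirty, the harness SIGKILLs the nvmf target while the grown lvstore is still open, starts a fresh target, and re-registers the backing AIO file, forcing blobstore recovery before the usual cluster-count checks and cleanup. Condensed from later in this trace, with the pid and paths as they appear there (nvmf_tgt and rpc.py abbreviate the full build/bin and scripts paths):

    kill -9 73386                        # SIGKILL the first nvmf_tgt; the grown lvstore is never closed cleanly
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &    # fresh target, pid 74190 in this run
    rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # the blobstore logs "Performing recovery on blobstore"; lvs and lvol reappear and the grown
    # geometry is re-checked: free_clusters == 61, total_data_clusters == 99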
00:18:34.841 Latency(us) 00:18:34.841 [2024-12-06T14:32:41.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.841 [2024-12-06T14:32:41.811Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.841 Nvme0n1 : 1.00 7813.00 30.52 0.00 0.00 0.00 0.00 0.00 00:18:34.841 [2024-12-06T14:32:41.811Z] =================================================================================================================== 00:18:34.841 [2024-12-06T14:32:41.811Z] Total : 7813.00 30.52 0.00 0.00 0.00 0.00 0.00 00:18:34.841 00:18:35.776 14:32:42 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:35.776 [2024-12-06T14:32:42.746Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.776 Nvme0n1 : 2.00 8012.50 31.30 0.00 0.00 0.00 0.00 0.00 00:18:35.776 [2024-12-06T14:32:42.746Z] =================================================================================================================== 00:18:35.776 [2024-12-06T14:32:42.746Z] Total : 8012.50 31.30 0.00 0.00 0.00 0.00 0.00 00:18:35.776 00:18:36.080 true 00:18:36.080 14:32:42 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:36.080 14:32:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:36.339 14:32:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:36.339 14:32:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:36.339 14:32:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 74039 00:18:36.906 [2024-12-06T14:32:43.876Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.906 Nvme0n1 : 3.00 8033.67 31.38 0.00 0.00 0.00 0.00 0.00 00:18:36.906 [2024-12-06T14:32:43.876Z] =================================================================================================================== 00:18:36.906 [2024-12-06T14:32:43.876Z] Total : 8033.67 31.38 0.00 0.00 0.00 0.00 0.00 00:18:36.906 00:18:37.841 [2024-12-06T14:32:44.811Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.841 Nvme0n1 : 4.00 7998.00 31.24 0.00 0.00 0.00 0.00 0.00 00:18:37.841 [2024-12-06T14:32:44.811Z] =================================================================================================================== 00:18:37.841 [2024-12-06T14:32:44.811Z] Total : 7998.00 31.24 0.00 0.00 0.00 0.00 0.00 00:18:37.841 00:18:38.774 [2024-12-06T14:32:45.744Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.774 Nvme0n1 : 5.00 7973.00 31.14 0.00 0.00 0.00 0.00 0.00 00:18:38.774 [2024-12-06T14:32:45.744Z] =================================================================================================================== 00:18:38.774 [2024-12-06T14:32:45.744Z] Total : 7973.00 31.14 0.00 0.00 0.00 0.00 0.00 00:18:38.774 00:18:40.146 [2024-12-06T14:32:47.116Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.146 Nvme0n1 : 6.00 7931.00 30.98 0.00 0.00 0.00 0.00 0.00 00:18:40.146 [2024-12-06T14:32:47.116Z] =================================================================================================================== 00:18:40.146 [2024-12-06T14:32:47.116Z] Total : 7931.00 30.98 0.00 0.00 0.00 0.00 0.00 00:18:40.146 00:18:41.103 [2024-12-06T14:32:48.073Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:18:41.103 Nvme0n1 : 7.00 7561.71 29.54 0.00 0.00 0.00 0.00 0.00 00:18:41.103 [2024-12-06T14:32:48.073Z] =================================================================================================================== 00:18:41.103 [2024-12-06T14:32:48.073Z] Total : 7561.71 29.54 0.00 0.00 0.00 0.00 0.00 00:18:41.103 00:18:42.037 [2024-12-06T14:32:49.007Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.037 Nvme0n1 : 8.00 7530.25 29.42 0.00 0.00 0.00 0.00 0.00 00:18:42.037 [2024-12-06T14:32:49.007Z] =================================================================================================================== 00:18:42.037 [2024-12-06T14:32:49.007Z] Total : 7530.25 29.42 0.00 0.00 0.00 0.00 0.00 00:18:42.037 00:18:42.971 [2024-12-06T14:32:49.941Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.971 Nvme0n1 : 9.00 7484.00 29.23 0.00 0.00 0.00 0.00 0.00 00:18:42.971 [2024-12-06T14:32:49.941Z] =================================================================================================================== 00:18:42.971 [2024-12-06T14:32:49.941Z] Total : 7484.00 29.23 0.00 0.00 0.00 0.00 0.00 00:18:42.971 00:18:43.905 [2024-12-06T14:32:50.875Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.905 Nvme0n1 : 10.00 7453.50 29.12 0.00 0.00 0.00 0.00 0.00 00:18:43.905 [2024-12-06T14:32:50.875Z] =================================================================================================================== 00:18:43.905 [2024-12-06T14:32:50.875Z] Total : 7453.50 29.12 0.00 0.00 0.00 0.00 0.00 00:18:43.905 00:18:43.905 00:18:43.905 Latency(us) 00:18:43.905 [2024-12-06T14:32:50.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.905 [2024-12-06T14:32:50.875Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.905 Nvme0n1 : 10.00 7464.03 29.16 0.00 0.00 17143.76 5928.03 316479.30 00:18:43.905 [2024-12-06T14:32:50.875Z] =================================================================================================================== 00:18:43.905 [2024-12-06T14:32:50.875Z] Total : 7464.03 29.16 0.00 0.00 17143.76 5928.03 316479.30 00:18:43.905 0 00:18:43.905 14:32:50 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73991 00:18:43.905 14:32:50 -- common/autotest_common.sh@936 -- # '[' -z 73991 ']' 00:18:43.905 14:32:50 -- common/autotest_common.sh@940 -- # kill -0 73991 00:18:43.905 14:32:50 -- common/autotest_common.sh@941 -- # uname 00:18:43.905 14:32:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.905 14:32:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73991 00:18:43.905 14:32:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:43.905 14:32:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:43.905 killing process with pid 73991 00:18:43.905 14:32:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73991' 00:18:43.905 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.905 00:18:43.905 Latency(us) 00:18:43.905 [2024-12-06T14:32:50.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.905 [2024-12-06T14:32:50.875Z] =================================================================================================================== 00:18:43.905 [2024-12-06T14:32:50.875Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.905 14:32:50 -- common/autotest_common.sh@955 
-- # kill 73991 00:18:43.905 14:32:50 -- common/autotest_common.sh@960 -- # wait 73991 00:18:44.163 14:32:51 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:44.422 14:32:51 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:44.422 14:32:51 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:44.685 14:32:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:44.686 14:32:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:44.686 14:32:51 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73386 00:18:44.686 14:32:51 -- target/nvmf_lvs_grow.sh@74 -- # wait 73386 00:18:44.949 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73386 Killed "${NVMF_APP[@]}" "$@" 00:18:44.949 14:32:51 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:44.949 14:32:51 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:44.949 14:32:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:44.949 14:32:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.949 14:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:44.949 14:32:51 -- nvmf/common.sh@469 -- # nvmfpid=74190 00:18:44.949 14:32:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:44.949 14:32:51 -- nvmf/common.sh@470 -- # waitforlisten 74190 00:18:44.949 14:32:51 -- common/autotest_common.sh@829 -- # '[' -z 74190 ']' 00:18:44.949 14:32:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.949 14:32:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.949 14:32:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.949 14:32:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.949 14:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:44.949 [2024-12-06 14:32:51.714369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:44.949 [2024-12-06 14:32:51.714527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.949 [2024-12-06 14:32:51.851536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.208 [2024-12-06 14:32:51.968316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:45.208 [2024-12-06 14:32:51.968511] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.208 [2024-12-06 14:32:51.968525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.208 [2024-12-06 14:32:51.968534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.208 [2024-12-06 14:32:51.968566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.775 14:32:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.775 14:32:52 -- common/autotest_common.sh@862 -- # return 0 00:18:45.775 14:32:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:45.775 14:32:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.775 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:46.034 14:32:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.034 14:32:52 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:46.316 [2024-12-06 14:32:53.042109] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:46.316 [2024-12-06 14:32:53.043240] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:46.316 [2024-12-06 14:32:53.044551] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:46.316 14:32:53 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:46.316 14:32:53 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 661362f9-0a9a-4521-a7ff-71abe7c0f201 00:18:46.316 14:32:53 -- common/autotest_common.sh@897 -- # local bdev_name=661362f9-0a9a-4521-a7ff-71abe7c0f201 00:18:46.316 14:32:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:46.316 14:32:53 -- common/autotest_common.sh@899 -- # local i 00:18:46.316 14:32:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:46.316 14:32:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:46.316 14:32:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:46.573 14:32:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 661362f9-0a9a-4521-a7ff-71abe7c0f201 -t 2000 00:18:46.832 [ 00:18:46.832 { 00:18:46.832 "aliases": [ 00:18:46.832 "lvs/lvol" 00:18:46.832 ], 00:18:46.832 "assigned_rate_limits": { 00:18:46.832 "r_mbytes_per_sec": 0, 00:18:46.832 "rw_ios_per_sec": 0, 00:18:46.832 "rw_mbytes_per_sec": 0, 00:18:46.832 "w_mbytes_per_sec": 0 00:18:46.832 }, 00:18:46.832 "block_size": 4096, 00:18:46.832 "claimed": false, 00:18:46.832 "driver_specific": { 00:18:46.832 "lvol": { 00:18:46.832 "base_bdev": "aio_bdev", 00:18:46.832 "clone": false, 00:18:46.832 "esnap_clone": false, 00:18:46.832 "lvol_store_uuid": "14ce8e48-e925-41bd-80af-a745c277bfde", 00:18:46.832 "snapshot": false, 00:18:46.832 "thin_provision": false 00:18:46.832 } 00:18:46.832 }, 00:18:46.832 "name": "661362f9-0a9a-4521-a7ff-71abe7c0f201", 00:18:46.832 "num_blocks": 38912, 00:18:46.832 "product_name": "Logical Volume", 00:18:46.832 "supported_io_types": { 00:18:46.832 "abort": false, 00:18:46.832 "compare": false, 00:18:46.832 "compare_and_write": false, 00:18:46.832 "flush": false, 00:18:46.832 "nvme_admin": false, 00:18:46.832 "nvme_io": false, 00:18:46.832 "read": true, 00:18:46.832 "reset": true, 00:18:46.832 "unmap": true, 00:18:46.832 "write": true, 00:18:46.832 "write_zeroes": true 00:18:46.832 }, 00:18:46.832 "uuid": "661362f9-0a9a-4521-a7ff-71abe7c0f201", 00:18:46.832 "zoned": false 00:18:46.832 } 00:18:46.832 ] 00:18:46.832 14:32:53 -- common/autotest_common.sh@905 -- # return 0 00:18:46.832 14:32:53 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
14ce8e48-e925-41bd-80af-a745c277bfde 00:18:46.832 14:32:53 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:47.091 14:32:53 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:47.092 14:32:53 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:47.092 14:32:53 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:47.350 14:32:54 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:47.350 14:32:54 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:47.609 [2024-12-06 14:32:54.363390] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:47.609 14:32:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:47.609 14:32:54 -- common/autotest_common.sh@650 -- # local es=0 00:18:47.609 14:32:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:47.609 14:32:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.609 14:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.609 14:32:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.609 14:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.609 14:32:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.609 14:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.609 14:32:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.609 14:32:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:47.609 14:32:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:47.868 2024/12/06 14:32:54 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:14ce8e48-e925-41bd-80af-a745c277bfde], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:18:47.868 request: 00:18:47.868 { 00:18:47.868 "method": "bdev_lvol_get_lvstores", 00:18:47.868 "params": { 00:18:47.868 "uuid": "14ce8e48-e925-41bd-80af-a745c277bfde" 00:18:47.868 } 00:18:47.868 } 00:18:47.868 Got JSON-RPC error response 00:18:47.868 GoRPCClient: error on JSON-RPC call 00:18:47.868 14:32:54 -- common/autotest_common.sh@653 -- # es=1 00:18:47.868 14:32:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.868 14:32:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.868 14:32:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.868 14:32:54 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:48.127 aio_bdev 00:18:48.127 14:32:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 661362f9-0a9a-4521-a7ff-71abe7c0f201 00:18:48.127 14:32:54 -- common/autotest_common.sh@897 -- # local bdev_name=661362f9-0a9a-4521-a7ff-71abe7c0f201 00:18:48.127 14:32:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:48.127 
14:32:54 -- common/autotest_common.sh@899 -- # local i 00:18:48.127 14:32:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:48.127 14:32:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:48.127 14:32:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:48.385 14:32:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 661362f9-0a9a-4521-a7ff-71abe7c0f201 -t 2000 00:18:48.643 [ 00:18:48.643 { 00:18:48.643 "aliases": [ 00:18:48.643 "lvs/lvol" 00:18:48.643 ], 00:18:48.643 "assigned_rate_limits": { 00:18:48.643 "r_mbytes_per_sec": 0, 00:18:48.643 "rw_ios_per_sec": 0, 00:18:48.643 "rw_mbytes_per_sec": 0, 00:18:48.643 "w_mbytes_per_sec": 0 00:18:48.643 }, 00:18:48.643 "block_size": 4096, 00:18:48.643 "claimed": false, 00:18:48.643 "driver_specific": { 00:18:48.643 "lvol": { 00:18:48.643 "base_bdev": "aio_bdev", 00:18:48.643 "clone": false, 00:18:48.643 "esnap_clone": false, 00:18:48.643 "lvol_store_uuid": "14ce8e48-e925-41bd-80af-a745c277bfde", 00:18:48.643 "snapshot": false, 00:18:48.643 "thin_provision": false 00:18:48.643 } 00:18:48.643 }, 00:18:48.643 "name": "661362f9-0a9a-4521-a7ff-71abe7c0f201", 00:18:48.643 "num_blocks": 38912, 00:18:48.643 "product_name": "Logical Volume", 00:18:48.643 "supported_io_types": { 00:18:48.643 "abort": false, 00:18:48.643 "compare": false, 00:18:48.643 "compare_and_write": false, 00:18:48.643 "flush": false, 00:18:48.643 "nvme_admin": false, 00:18:48.643 "nvme_io": false, 00:18:48.643 "read": true, 00:18:48.643 "reset": true, 00:18:48.643 "unmap": true, 00:18:48.643 "write": true, 00:18:48.643 "write_zeroes": true 00:18:48.643 }, 00:18:48.643 "uuid": "661362f9-0a9a-4521-a7ff-71abe7c0f201", 00:18:48.643 "zoned": false 00:18:48.643 } 00:18:48.643 ] 00:18:48.643 14:32:55 -- common/autotest_common.sh@905 -- # return 0 00:18:48.643 14:32:55 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:48.643 14:32:55 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:48.901 14:32:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:48.901 14:32:55 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:48.901 14:32:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:49.158 14:32:56 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:49.158 14:32:56 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 661362f9-0a9a-4521-a7ff-71abe7c0f201 00:18:49.416 14:32:56 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14ce8e48-e925-41bd-80af-a745c277bfde 00:18:49.674 14:32:56 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:49.933 14:32:56 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:50.504 00:18:50.504 real 0m20.789s 00:18:50.504 user 0m42.566s 00:18:50.504 sys 0m8.065s 00:18:50.504 14:32:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:50.504 14:32:57 -- common/autotest_common.sh@10 -- # set +x 00:18:50.504 ************************************ 00:18:50.504 END TEST lvs_grow_dirty 00:18:50.504 ************************************ 00:18:50.504 14:32:57 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:50.504 14:32:57 -- common/autotest_common.sh@806 -- # type=--id 00:18:50.504 14:32:57 -- common/autotest_common.sh@807 -- # id=0 00:18:50.504 14:32:57 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:50.504 14:32:57 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:50.504 14:32:57 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:50.504 14:32:57 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:50.504 14:32:57 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:50.504 14:32:57 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:50.504 nvmf_trace.0 00:18:50.504 14:32:57 -- common/autotest_common.sh@821 -- # return 0 00:18:50.504 14:32:57 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:50.504 14:32:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:50.504 14:32:57 -- nvmf/common.sh@116 -- # sync 00:18:50.762 14:32:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:50.762 14:32:57 -- nvmf/common.sh@119 -- # set +e 00:18:50.762 14:32:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:50.762 14:32:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:50.762 rmmod nvme_tcp 00:18:50.763 rmmod nvme_fabrics 00:18:50.763 rmmod nvme_keyring 00:18:50.763 14:32:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:50.763 14:32:57 -- nvmf/common.sh@123 -- # set -e 00:18:50.763 14:32:57 -- nvmf/common.sh@124 -- # return 0 00:18:50.763 14:32:57 -- nvmf/common.sh@477 -- # '[' -n 74190 ']' 00:18:50.763 14:32:57 -- nvmf/common.sh@478 -- # killprocess 74190 00:18:50.763 14:32:57 -- common/autotest_common.sh@936 -- # '[' -z 74190 ']' 00:18:50.763 14:32:57 -- common/autotest_common.sh@940 -- # kill -0 74190 00:18:50.763 14:32:57 -- common/autotest_common.sh@941 -- # uname 00:18:50.763 14:32:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.763 14:32:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74190 00:18:50.763 14:32:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:50.763 14:32:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:50.763 killing process with pid 74190 00:18:50.763 14:32:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74190' 00:18:50.763 14:32:57 -- common/autotest_common.sh@955 -- # kill 74190 00:18:50.763 14:32:57 -- common/autotest_common.sh@960 -- # wait 74190 00:18:51.021 14:32:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:51.021 14:32:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:51.021 14:32:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:51.021 14:32:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.021 14:32:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:51.021 14:32:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.021 14:32:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.021 14:32:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.021 14:32:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:51.021 00:18:51.021 real 0m42.101s 00:18:51.021 user 1m7.396s 00:18:51.021 sys 0m11.158s 00:18:51.021 14:32:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:51.021 14:32:57 -- common/autotest_common.sh@10 -- # set +x 00:18:51.021 
************************************ 00:18:51.021 END TEST nvmf_lvs_grow 00:18:51.021 ************************************ 00:18:51.280 14:32:57 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:51.280 14:32:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:51.280 14:32:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.280 14:32:57 -- common/autotest_common.sh@10 -- # set +x 00:18:51.280 ************************************ 00:18:51.280 START TEST nvmf_bdev_io_wait 00:18:51.280 ************************************ 00:18:51.280 14:32:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:51.280 * Looking for test storage... 00:18:51.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:51.280 14:32:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:51.280 14:32:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:51.280 14:32:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:51.280 14:32:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:51.280 14:32:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:51.280 14:32:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:51.280 14:32:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:51.280 14:32:58 -- scripts/common.sh@335 -- # IFS=.-: 00:18:51.280 14:32:58 -- scripts/common.sh@335 -- # read -ra ver1 00:18:51.280 14:32:58 -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.280 14:32:58 -- scripts/common.sh@336 -- # read -ra ver2 00:18:51.280 14:32:58 -- scripts/common.sh@337 -- # local 'op=<' 00:18:51.280 14:32:58 -- scripts/common.sh@339 -- # ver1_l=2 00:18:51.280 14:32:58 -- scripts/common.sh@340 -- # ver2_l=1 00:18:51.280 14:32:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:51.280 14:32:58 -- scripts/common.sh@343 -- # case "$op" in 00:18:51.280 14:32:58 -- scripts/common.sh@344 -- # : 1 00:18:51.280 14:32:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:51.280 14:32:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.280 14:32:58 -- scripts/common.sh@364 -- # decimal 1 00:18:51.280 14:32:58 -- scripts/common.sh@352 -- # local d=1 00:18:51.280 14:32:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.280 14:32:58 -- scripts/common.sh@354 -- # echo 1 00:18:51.280 14:32:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:51.280 14:32:58 -- scripts/common.sh@365 -- # decimal 2 00:18:51.280 14:32:58 -- scripts/common.sh@352 -- # local d=2 00:18:51.280 14:32:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.280 14:32:58 -- scripts/common.sh@354 -- # echo 2 00:18:51.280 14:32:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:51.280 14:32:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:51.280 14:32:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:51.280 14:32:58 -- scripts/common.sh@367 -- # return 0 00:18:51.280 14:32:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.280 14:32:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.280 --rc genhtml_branch_coverage=1 00:18:51.280 --rc genhtml_function_coverage=1 00:18:51.280 --rc genhtml_legend=1 00:18:51.280 --rc geninfo_all_blocks=1 00:18:51.280 --rc geninfo_unexecuted_blocks=1 00:18:51.280 00:18:51.280 ' 00:18:51.280 14:32:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.280 --rc genhtml_branch_coverage=1 00:18:51.280 --rc genhtml_function_coverage=1 00:18:51.280 --rc genhtml_legend=1 00:18:51.280 --rc geninfo_all_blocks=1 00:18:51.280 --rc geninfo_unexecuted_blocks=1 00:18:51.280 00:18:51.280 ' 00:18:51.280 14:32:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.280 --rc genhtml_branch_coverage=1 00:18:51.280 --rc genhtml_function_coverage=1 00:18:51.280 --rc genhtml_legend=1 00:18:51.280 --rc geninfo_all_blocks=1 00:18:51.280 --rc geninfo_unexecuted_blocks=1 00:18:51.280 00:18:51.280 ' 00:18:51.280 14:32:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.280 --rc genhtml_branch_coverage=1 00:18:51.280 --rc genhtml_function_coverage=1 00:18:51.280 --rc genhtml_legend=1 00:18:51.280 --rc geninfo_all_blocks=1 00:18:51.280 --rc geninfo_unexecuted_blocks=1 00:18:51.280 00:18:51.280 ' 00:18:51.280 14:32:58 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:51.280 14:32:58 -- nvmf/common.sh@7 -- # uname -s 00:18:51.280 14:32:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.280 14:32:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.280 14:32:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.280 14:32:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.280 14:32:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.280 14:32:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.280 14:32:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.280 14:32:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.280 14:32:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.280 14:32:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.280 14:32:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
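As the common.sh lines above show, each run generates a fresh host NQN with nvme gen-hostnqn and keeps its UUID suffix as the host ID, so initiator-side commands can pass a matching --hostnqn/--hostid pair. A small sketch of the same idea; the parameter expansion used to strip the prefix is an assumption, the log only shows the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep just the trailing uuid as the host id
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")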
00:18:51.280 14:32:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:18:51.280 14:32:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.280 14:32:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.280 14:32:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:51.280 14:32:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:51.280 14:32:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.280 14:32:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.280 14:32:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.280 14:32:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.281 14:32:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.281 14:32:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.281 14:32:58 -- paths/export.sh@5 -- # export PATH 00:18:51.281 14:32:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.281 14:32:58 -- nvmf/common.sh@46 -- # : 0 00:18:51.281 14:32:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:51.281 14:32:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:51.281 14:32:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:51.281 14:32:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.281 14:32:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.281 14:32:58 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:51.281 14:32:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:51.281 14:32:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:51.281 14:32:58 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:51.281 14:32:58 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:51.281 14:32:58 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:51.281 14:32:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:51.281 14:32:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.281 14:32:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:51.281 14:32:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:51.281 14:32:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:51.281 14:32:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.281 14:32:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.281 14:32:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.281 14:32:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:51.281 14:32:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:51.281 14:32:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:51.281 14:32:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:51.281 14:32:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:51.281 14:32:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:51.281 14:32:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.281 14:32:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.281 14:32:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:51.281 14:32:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:51.281 14:32:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:51.281 14:32:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:51.281 14:32:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:51.281 14:32:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.281 14:32:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:51.281 14:32:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:51.281 14:32:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:51.281 14:32:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:51.281 14:32:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:51.281 14:32:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:51.281 Cannot find device "nvmf_tgt_br" 00:18:51.281 14:32:58 -- nvmf/common.sh@154 -- # true 00:18:51.281 14:32:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:51.539 Cannot find device "nvmf_tgt_br2" 00:18:51.539 14:32:58 -- nvmf/common.sh@155 -- # true 00:18:51.539 14:32:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:51.539 14:32:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:51.539 Cannot find device "nvmf_tgt_br" 00:18:51.539 14:32:58 -- nvmf/common.sh@157 -- # true 00:18:51.539 14:32:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:51.539 Cannot find device "nvmf_tgt_br2" 00:18:51.539 14:32:58 -- nvmf/common.sh@158 -- # true 00:18:51.539 14:32:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:51.539 14:32:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:51.539 14:32:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:51.539 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.539 14:32:58 -- nvmf/common.sh@161 -- # true 00:18:51.539 14:32:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.539 14:32:58 -- nvmf/common.sh@162 -- # true 00:18:51.539 14:32:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:51.539 14:32:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:51.539 14:32:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:51.539 14:32:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:51.539 14:32:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:51.539 14:32:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:51.539 14:32:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:51.539 14:32:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:51.539 14:32:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:51.539 14:32:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:51.539 14:32:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:51.539 14:32:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:51.539 14:32:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:51.539 14:32:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:51.539 14:32:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:51.539 14:32:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:51.539 14:32:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:51.539 14:32:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:51.539 14:32:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:51.539 14:32:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:51.539 14:32:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:51.798 14:32:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:51.798 14:32:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:51.798 14:32:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:51.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:18:51.798 00:18:51.798 --- 10.0.0.2 ping statistics --- 00:18:51.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.798 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:51.798 14:32:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:51.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:51.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:18:51.798 00:18:51.798 --- 10.0.0.3 ping statistics --- 00:18:51.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.798 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:51.798 14:32:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:51.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:51.798 00:18:51.798 --- 10.0.0.1 ping statistics --- 00:18:51.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.798 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:51.798 14:32:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.798 14:32:58 -- nvmf/common.sh@421 -- # return 0 00:18:51.798 14:32:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:51.798 14:32:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.798 14:32:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:51.798 14:32:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:51.798 14:32:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.798 14:32:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:51.798 14:32:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:51.798 14:32:58 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:51.798 14:32:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:51.798 14:32:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.798 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:18:51.798 14:32:58 -- nvmf/common.sh@469 -- # nvmfpid=74618 00:18:51.798 14:32:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:51.798 14:32:58 -- nvmf/common.sh@470 -- # waitforlisten 74618 00:18:51.798 14:32:58 -- common/autotest_common.sh@829 -- # '[' -z 74618 ']' 00:18:51.798 14:32:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.798 14:32:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.798 14:32:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.798 14:32:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.798 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:18:51.798 [2024-12-06 14:32:58.622533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:51.798 [2024-12-06 14:32:58.622648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.798 [2024-12-06 14:32:58.763224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.056 [2024-12-06 14:32:58.891909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:52.056 [2024-12-06 14:32:58.892131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.056 [2024-12-06 14:32:58.892145] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.056 [2024-12-06 14:32:58.892154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
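The nvmf_veth_init block above builds the virtual test network that the tcp tests run on: the target lives in its own network namespace, each side of the link is a veth pair, and the bridge-facing ends are enslaved to nvmf_br so the initiator at 10.0.0.1 can reach the target listener at 10.0.0.2:4420. Condensed to its essentials, with the interface names and addresses used by this run:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moved into the ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # the target address must answer before the target app is started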
00:18:52.056 [2024-12-06 14:32:58.892333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.056 [2024-12-06 14:32:58.892578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.056 [2024-12-06 14:32:58.892699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.056 [2024-12-06 14:32:58.892705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.991 14:32:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.991 14:32:59 -- common/autotest_common.sh@862 -- # return 0 00:18:52.991 14:32:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:52.991 14:32:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.991 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 14:32:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:52.992 14:32:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.992 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 14:32:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:52.992 14:32:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.992 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 14:32:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.992 14:32:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.992 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 [2024-12-06 14:32:59.733854] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.992 14:32:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:52.992 14:32:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.992 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 Malloc0 00:18:52.992 14:32:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.992 14:32:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.992 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 14:32:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.992 14:32:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.992 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 14:32:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.992 14:32:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.992 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:52.992 [2024-12-06 14:32:59.789959] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.992 14:32:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74671 00:18:52.992 14:32:59 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # config=() 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:52.992 14:32:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@30 -- # READ_PID=74673 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:52.992 { 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme$subsystem", 00:18:52.992 "trtype": "$TEST_TRANSPORT", 00:18:52.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "$NVMF_PORT", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.992 "hdgst": ${hdgst:-false}, 00:18:52.992 "ddgst": ${ddgst:-false} 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 } 00:18:52.992 EOF 00:18:52.992 )") 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74675 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # config=() 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:52.992 14:32:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:52.992 { 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme$subsystem", 00:18:52.992 "trtype": "$TEST_TRANSPORT", 00:18:52.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "$NVMF_PORT", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.992 "hdgst": ${hdgst:-false}, 00:18:52.992 "ddgst": ${ddgst:-false} 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 } 00:18:52.992 EOF 00:18:52.992 )") 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # cat 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74678 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@35 -- # sync 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # cat 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:52.992 14:32:59 -- nvmf/common.sh@544 -- # jq . 
00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # config=() 00:18:52.992 14:32:59 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # config=() 00:18:52.992 14:32:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:52.992 14:32:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:52.992 14:32:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:52.992 { 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme$subsystem", 00:18:52.992 "trtype": "$TEST_TRANSPORT", 00:18:52.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "$NVMF_PORT", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.992 "hdgst": ${hdgst:-false}, 00:18:52.992 "ddgst": ${ddgst:-false} 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 } 00:18:52.992 EOF 00:18:52.992 )") 00:18:52.992 14:32:59 -- nvmf/common.sh@544 -- # jq . 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:52.992 { 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme$subsystem", 00:18:52.992 "trtype": "$TEST_TRANSPORT", 00:18:52.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "$NVMF_PORT", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.992 "hdgst": ${hdgst:-false}, 00:18:52.992 "ddgst": ${ddgst:-false} 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 } 00:18:52.992 EOF 00:18:52.992 )") 00:18:52.992 14:32:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:52.992 14:32:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme1", 00:18:52.992 "trtype": "tcp", 00:18:52.992 "traddr": "10.0.0.2", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "4420", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.992 "hdgst": false, 00:18:52.992 "ddgst": false 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 }' 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # cat 00:18:52.992 14:32:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:52.992 14:32:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme1", 00:18:52.992 "trtype": "tcp", 00:18:52.992 "traddr": "10.0.0.2", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "4420", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.992 "hdgst": false, 00:18:52.992 "ddgst": false 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 }' 00:18:52.992 14:32:59 -- nvmf/common.sh@542 -- # cat 00:18:52.992 14:32:59 -- nvmf/common.sh@544 -- # jq . 00:18:52.992 14:32:59 -- nvmf/common.sh@544 -- # jq . 
00:18:52.992 14:32:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:52.992 14:32:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme1", 00:18:52.992 "trtype": "tcp", 00:18:52.992 "traddr": "10.0.0.2", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "4420", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.992 "hdgst": false, 00:18:52.992 "ddgst": false 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 }' 00:18:52.992 14:32:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:52.992 14:32:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:52.992 "params": { 00:18:52.992 "name": "Nvme1", 00:18:52.992 "trtype": "tcp", 00:18:52.992 "traddr": "10.0.0.2", 00:18:52.992 "adrfam": "ipv4", 00:18:52.992 "trsvcid": "4420", 00:18:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.992 "hdgst": false, 00:18:52.992 "ddgst": false 00:18:52.992 }, 00:18:52.992 "method": "bdev_nvme_attach_controller" 00:18:52.992 }' 00:18:52.992 [2024-12-06 14:32:59.849784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:52.992 [2024-12-06 14:32:59.850432] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:52.993 [2024-12-06 14:32:59.853995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:52.993 [2024-12-06 14:32:59.854488] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:52.993 14:32:59 -- target/bdev_io_wait.sh@37 -- # wait 74671 00:18:52.993 [2024-12-06 14:32:59.870450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:52.993 [2024-12-06 14:32:59.870531] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:52.993 [2024-12-06 14:32:59.875957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:52.993 [2024-12-06 14:32:59.876395] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:53.251 [2024-12-06 14:33:00.067088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.251 [2024-12-06 14:33:00.148158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.251 [2024-12-06 14:33:00.172775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:53.509 [2024-12-06 14:33:00.225699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.509 [2024-12-06 14:33:00.249494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:53.509 [2024-12-06 14:33:00.305669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.509 Running I/O for 1 seconds... 
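The four bdevperf instances above exercise write, read, flush and unmap concurrently against the same cnode1 subsystem, each on its own core mask (0x10/0x20/0x40/0x80) and with its own shared-memory id (-i 1..4); the /dev/fd/63 argument seen in their command lines is bash process substitution over gen_nvmf_target_json, whose bdev_nvme_attach_controller fragment is the JSON printed just above. A single instance could be reproduced roughly as follows, with paths abbreviated relative to the spdk repo root:

    # one worker: queue depth 128, 4 KiB I/O, sequential write for 1 second on the core picked by mask 0x10
    source ./test/nvmf/common.sh          # provides gen_nvmf_target_json
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)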
00:18:53.509 [2024-12-06 14:33:00.326458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:53.509 Running I/O for 1 seconds... 00:18:53.509 [2024-12-06 14:33:00.405894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:53.509 Running I/O for 1 seconds... 00:18:53.767 Running I/O for 1 seconds... 00:18:54.703 00:18:54.703 Latency(us) 00:18:54.703 [2024-12-06T14:33:01.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.703 [2024-12-06T14:33:01.673Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:54.703 Nvme1n1 : 1.00 198049.61 773.63 0.00 0.00 643.58 256.93 949.53 00:18:54.703 [2024-12-06T14:33:01.673Z] =================================================================================================================== 00:18:54.703 [2024-12-06T14:33:01.673Z] Total : 198049.61 773.63 0.00 0.00 643.58 256.93 949.53 00:18:54.703 00:18:54.703 Latency(us) 00:18:54.703 [2024-12-06T14:33:01.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.703 [2024-12-06T14:33:01.673Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:54.703 Nvme1n1 : 1.01 10967.53 42.84 0.00 0.00 11626.05 2666.12 13583.83 00:18:54.703 [2024-12-06T14:33:01.673Z] =================================================================================================================== 00:18:54.703 [2024-12-06T14:33:01.673Z] Total : 10967.53 42.84 0.00 0.00 11626.05 2666.12 13583.83 00:18:54.703 00:18:54.703 Latency(us) 00:18:54.703 [2024-12-06T14:33:01.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.703 [2024-12-06T14:33:01.673Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:54.704 Nvme1n1 : 1.01 8378.65 32.73 0.00 0.00 15209.38 7923.90 28478.37 00:18:54.704 [2024-12-06T14:33:01.674Z] =================================================================================================================== 00:18:54.704 [2024-12-06T14:33:01.674Z] Total : 8378.65 32.73 0.00 0.00 15209.38 7923.90 28478.37 00:18:54.704 00:18:54.704 Latency(us) 00:18:54.704 [2024-12-06T14:33:01.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.704 [2024-12-06T14:33:01.674Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:54.704 Nvme1n1 : 1.01 7971.98 31.14 0.00 0.00 15977.95 9055.88 27525.12 00:18:54.704 [2024-12-06T14:33:01.674Z] =================================================================================================================== 00:18:54.704 [2024-12-06T14:33:01.674Z] Total : 7971.98 31.14 0.00 0.00 15977.95 9055.88 27525.12 00:18:54.963 14:33:01 -- target/bdev_io_wait.sh@38 -- # wait 74673 00:18:54.963 14:33:01 -- target/bdev_io_wait.sh@39 -- # wait 74675 00:18:54.963 14:33:01 -- target/bdev_io_wait.sh@40 -- # wait 74678 00:18:54.963 14:33:01 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.963 14:33:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.963 14:33:01 -- common/autotest_common.sh@10 -- # set +x 00:18:54.963 14:33:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.963 14:33:01 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:54.963 14:33:01 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:54.963 14:33:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:54.963 14:33:01 -- nvmf/common.sh@116 -- # sync 00:18:54.963 14:33:01 -- nvmf/common.sh@118 
-- # '[' tcp == tcp ']' 00:18:54.963 14:33:01 -- nvmf/common.sh@119 -- # set +e 00:18:54.963 14:33:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:54.963 14:33:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:54.963 rmmod nvme_tcp 00:18:54.963 rmmod nvme_fabrics 00:18:55.222 rmmod nvme_keyring 00:18:55.222 14:33:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:55.222 14:33:01 -- nvmf/common.sh@123 -- # set -e 00:18:55.222 14:33:01 -- nvmf/common.sh@124 -- # return 0 00:18:55.222 14:33:01 -- nvmf/common.sh@477 -- # '[' -n 74618 ']' 00:18:55.222 14:33:01 -- nvmf/common.sh@478 -- # killprocess 74618 00:18:55.222 14:33:01 -- common/autotest_common.sh@936 -- # '[' -z 74618 ']' 00:18:55.222 14:33:01 -- common/autotest_common.sh@940 -- # kill -0 74618 00:18:55.222 14:33:01 -- common/autotest_common.sh@941 -- # uname 00:18:55.222 14:33:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:55.222 14:33:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74618 00:18:55.222 killing process with pid 74618 00:18:55.222 14:33:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:55.222 14:33:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:55.222 14:33:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74618' 00:18:55.222 14:33:01 -- common/autotest_common.sh@955 -- # kill 74618 00:18:55.222 14:33:01 -- common/autotest_common.sh@960 -- # wait 74618 00:18:55.481 14:33:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:55.481 14:33:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:55.481 14:33:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:55.481 14:33:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.481 14:33:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:55.481 14:33:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.481 14:33:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.481 14:33:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.481 14:33:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:55.481 00:18:55.481 real 0m4.267s 00:18:55.481 user 0m18.353s 00:18:55.481 sys 0m2.171s 00:18:55.481 14:33:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:55.481 14:33:02 -- common/autotest_common.sh@10 -- # set +x 00:18:55.481 ************************************ 00:18:55.481 END TEST nvmf_bdev_io_wait 00:18:55.481 ************************************ 00:18:55.481 14:33:02 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:55.481 14:33:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:55.481 14:33:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:55.481 14:33:02 -- common/autotest_common.sh@10 -- # set +x 00:18:55.481 ************************************ 00:18:55.481 START TEST nvmf_queue_depth 00:18:55.481 ************************************ 00:18:55.481 14:33:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:55.481 * Looking for test storage... 
00:18:55.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:55.481 14:33:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:55.481 14:33:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:55.481 14:33:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:55.740 14:33:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:55.740 14:33:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:55.740 14:33:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:55.740 14:33:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:55.740 14:33:02 -- scripts/common.sh@335 -- # IFS=.-: 00:18:55.740 14:33:02 -- scripts/common.sh@335 -- # read -ra ver1 00:18:55.740 14:33:02 -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.740 14:33:02 -- scripts/common.sh@336 -- # read -ra ver2 00:18:55.740 14:33:02 -- scripts/common.sh@337 -- # local 'op=<' 00:18:55.740 14:33:02 -- scripts/common.sh@339 -- # ver1_l=2 00:18:55.740 14:33:02 -- scripts/common.sh@340 -- # ver2_l=1 00:18:55.740 14:33:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:55.740 14:33:02 -- scripts/common.sh@343 -- # case "$op" in 00:18:55.740 14:33:02 -- scripts/common.sh@344 -- # : 1 00:18:55.740 14:33:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:55.740 14:33:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.740 14:33:02 -- scripts/common.sh@364 -- # decimal 1 00:18:55.740 14:33:02 -- scripts/common.sh@352 -- # local d=1 00:18:55.740 14:33:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.740 14:33:02 -- scripts/common.sh@354 -- # echo 1 00:18:55.740 14:33:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:55.740 14:33:02 -- scripts/common.sh@365 -- # decimal 2 00:18:55.740 14:33:02 -- scripts/common.sh@352 -- # local d=2 00:18:55.740 14:33:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.740 14:33:02 -- scripts/common.sh@354 -- # echo 2 00:18:55.740 14:33:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:55.740 14:33:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:55.740 14:33:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:55.740 14:33:02 -- scripts/common.sh@367 -- # return 0 00:18:55.740 14:33:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.740 14:33:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.741 --rc genhtml_branch_coverage=1 00:18:55.741 --rc genhtml_function_coverage=1 00:18:55.741 --rc genhtml_legend=1 00:18:55.741 --rc geninfo_all_blocks=1 00:18:55.741 --rc geninfo_unexecuted_blocks=1 00:18:55.741 00:18:55.741 ' 00:18:55.741 14:33:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.741 --rc genhtml_branch_coverage=1 00:18:55.741 --rc genhtml_function_coverage=1 00:18:55.741 --rc genhtml_legend=1 00:18:55.741 --rc geninfo_all_blocks=1 00:18:55.741 --rc geninfo_unexecuted_blocks=1 00:18:55.741 00:18:55.741 ' 00:18:55.741 14:33:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.741 --rc genhtml_branch_coverage=1 00:18:55.741 --rc genhtml_function_coverage=1 00:18:55.741 --rc genhtml_legend=1 00:18:55.741 --rc geninfo_all_blocks=1 00:18:55.741 --rc geninfo_unexecuted_blocks=1 00:18:55.741 00:18:55.741 ' 00:18:55.741 
14:33:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.741 --rc genhtml_branch_coverage=1 00:18:55.741 --rc genhtml_function_coverage=1 00:18:55.741 --rc genhtml_legend=1 00:18:55.741 --rc geninfo_all_blocks=1 00:18:55.741 --rc geninfo_unexecuted_blocks=1 00:18:55.741 00:18:55.741 ' 00:18:55.741 14:33:02 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.741 14:33:02 -- nvmf/common.sh@7 -- # uname -s 00:18:55.741 14:33:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.741 14:33:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.741 14:33:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.741 14:33:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.741 14:33:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.741 14:33:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.741 14:33:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.741 14:33:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.741 14:33:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.741 14:33:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.741 14:33:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:18:55.741 14:33:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:18:55.741 14:33:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.741 14:33:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.741 14:33:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.741 14:33:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.741 14:33:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.741 14:33:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.741 14:33:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.741 14:33:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.741 14:33:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.741 14:33:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.741 14:33:02 -- paths/export.sh@5 -- # export PATH 00:18:55.741 14:33:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.741 14:33:02 -- nvmf/common.sh@46 -- # : 0 00:18:55.741 14:33:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:55.741 14:33:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:55.741 14:33:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:55.741 14:33:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.741 14:33:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.741 14:33:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:55.741 14:33:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:55.741 14:33:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:55.741 14:33:02 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:55.741 14:33:02 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:55.741 14:33:02 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.741 14:33:02 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:55.741 14:33:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:55.741 14:33:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.741 14:33:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:55.741 14:33:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:55.741 14:33:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:55.741 14:33:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.741 14:33:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.741 14:33:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.741 14:33:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:55.741 14:33:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:55.741 14:33:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:55.741 14:33:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:55.741 14:33:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:55.741 14:33:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:55.741 14:33:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.741 14:33:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.741 14:33:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:55.741 14:33:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:55.741 14:33:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.741 14:33:02 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.741 14:33:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.741 14:33:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.741 14:33:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.741 14:33:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.741 14:33:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.741 14:33:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.741 14:33:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:55.741 14:33:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:55.741 Cannot find device "nvmf_tgt_br" 00:18:55.741 14:33:02 -- nvmf/common.sh@154 -- # true 00:18:55.741 14:33:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.741 Cannot find device "nvmf_tgt_br2" 00:18:55.741 14:33:02 -- nvmf/common.sh@155 -- # true 00:18:55.741 14:33:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:55.741 14:33:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:55.741 Cannot find device "nvmf_tgt_br" 00:18:55.741 14:33:02 -- nvmf/common.sh@157 -- # true 00:18:55.741 14:33:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:55.741 Cannot find device "nvmf_tgt_br2" 00:18:55.741 14:33:02 -- nvmf/common.sh@158 -- # true 00:18:55.741 14:33:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:55.741 14:33:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:55.741 14:33:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.741 14:33:02 -- nvmf/common.sh@161 -- # true 00:18:55.741 14:33:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.742 14:33:02 -- nvmf/common.sh@162 -- # true 00:18:55.742 14:33:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.742 14:33:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.742 14:33:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.742 14:33:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.742 14:33:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.742 14:33:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.742 14:33:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:55.742 14:33:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:55.742 14:33:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:55.742 14:33:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:55.742 14:33:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:55.742 14:33:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:55.742 14:33:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:55.742 14:33:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:56.000 14:33:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:56.000 14:33:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:56.000 14:33:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:56.000 14:33:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:56.000 14:33:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:56.000 14:33:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:56.000 14:33:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:56.000 14:33:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:56.000 14:33:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:56.000 14:33:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:56.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:56.000 00:18:56.000 --- 10.0.0.2 ping statistics --- 00:18:56.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.000 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:56.000 14:33:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:56.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:56.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:18:56.000 00:18:56.000 --- 10.0.0.3 ping statistics --- 00:18:56.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.000 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:56.000 14:33:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:56.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:18:56.000 00:18:56.000 --- 10.0.0.1 ping statistics --- 00:18:56.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.000 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:18:56.000 14:33:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.000 14:33:02 -- nvmf/common.sh@421 -- # return 0 00:18:56.000 14:33:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:56.000 14:33:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.000 14:33:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:56.000 14:33:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:56.000 14:33:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.000 14:33:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:56.000 14:33:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:56.000 14:33:02 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:56.000 14:33:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:56.001 14:33:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.001 14:33:02 -- common/autotest_common.sh@10 -- # set +x 00:18:56.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
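Editor's note: the nvmf_veth_init steps traced above build a small veth topology (host-side nvmf_init_if at 10.0.0.1, target-side nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge). A minimal stand-alone sketch of that setup, reusing the interface and namespace names from this log, is:

  # sketch only; common.sh also handles cleanup and retries not shown here
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for i in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$i" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT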
00:18:56.001 14:33:02 -- nvmf/common.sh@469 -- # nvmfpid=74917 00:18:56.001 14:33:02 -- nvmf/common.sh@470 -- # waitforlisten 74917 00:18:56.001 14:33:02 -- common/autotest_common.sh@829 -- # '[' -z 74917 ']' 00:18:56.001 14:33:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.001 14:33:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.001 14:33:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.001 14:33:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.001 14:33:02 -- common/autotest_common.sh@10 -- # set +x 00:18:56.001 14:33:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:56.001 [2024-12-06 14:33:02.880695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:56.001 [2024-12-06 14:33:02.880800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.259 [2024-12-06 14:33:03.022218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.259 [2024-12-06 14:33:03.147893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:56.259 [2024-12-06 14:33:03.148463] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.259 [2024-12-06 14:33:03.148491] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.259 [2024-12-06 14:33:03.148503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:56.259 [2024-12-06 14:33:03.148545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.193 14:33:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.193 14:33:03 -- common/autotest_common.sh@862 -- # return 0 00:18:57.193 14:33:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:57.193 14:33:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.194 14:33:03 -- common/autotest_common.sh@10 -- # set +x 00:18:57.194 14:33:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.194 14:33:03 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.194 14:33:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.194 14:33:03 -- common/autotest_common.sh@10 -- # set +x 00:18:57.194 [2024-12-06 14:33:03.963401] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.194 14:33:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.194 14:33:03 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:57.194 14:33:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.194 14:33:03 -- common/autotest_common.sh@10 -- # set +x 00:18:57.194 Malloc0 00:18:57.194 14:33:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.194 14:33:04 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:57.194 14:33:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.194 14:33:04 -- common/autotest_common.sh@10 -- # set +x 00:18:57.194 14:33:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.194 14:33:04 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:57.194 14:33:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.194 14:33:04 -- common/autotest_common.sh@10 -- # set +x 00:18:57.194 14:33:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.194 14:33:04 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.194 14:33:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.194 14:33:04 -- common/autotest_common.sh@10 -- # set +x 00:18:57.194 [2024-12-06 14:33:04.027537] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
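Editor's note: the rpc_cmd calls traced above amount to the following target-side setup for the queue_depth test. This is a hedged sketch using the standard scripts/rpc.py client against the same /var/tmp/spdk.sock target started above, rather than the test's rpc_cmd wrapper:

  # assumes nvmf_tgt is already running inside nvmf_tgt_ns_spdk (as started above)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420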
00:18:57.194 14:33:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.194 14:33:04 -- target/queue_depth.sh@30 -- # bdevperf_pid=74967 00:18:57.194 14:33:04 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:57.194 14:33:04 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:57.194 14:33:04 -- target/queue_depth.sh@33 -- # waitforlisten 74967 /var/tmp/bdevperf.sock 00:18:57.194 14:33:04 -- common/autotest_common.sh@829 -- # '[' -z 74967 ']' 00:18:57.194 14:33:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.194 14:33:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.194 14:33:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.194 14:33:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.194 14:33:04 -- common/autotest_common.sh@10 -- # set +x 00:18:57.194 [2024-12-06 14:33:04.082216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:57.194 [2024-12-06 14:33:04.082656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74967 ] 00:18:57.451 [2024-12-06 14:33:04.225225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.451 [2024-12-06 14:33:04.343052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.385 14:33:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.385 14:33:05 -- common/autotest_common.sh@862 -- # return 0 00:18:58.385 14:33:05 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:58.385 14:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.385 14:33:05 -- common/autotest_common.sh@10 -- # set +x 00:18:58.385 NVMe0n1 00:18:58.385 14:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.385 14:33:05 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:58.385 Running I/O for 10 seconds... 
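Editor's note: the initiator side of this queue_depth run can be reproduced with roughly the commands below, a minimal sketch assuming the bdevperf binary and helper-script paths shown in this log:

  # start bdevperf paused (-z) with queue depth 1024, 4 KiB verify workload, 10 s runtime
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the NVMe-oF/TCP controller exported above, then kick off the run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests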
00:19:10.579 00:19:10.579 Latency(us) 00:19:10.579 [2024-12-06T14:33:17.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.579 [2024-12-06T14:33:17.549Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:10.579 Verification LBA range: start 0x0 length 0x4000 00:19:10.579 NVMe0n1 : 10.06 14240.65 55.63 0.00 0.00 71649.89 13702.98 61008.06 00:19:10.579 [2024-12-06T14:33:17.549Z] =================================================================================================================== 00:19:10.579 [2024-12-06T14:33:17.549Z] Total : 14240.65 55.63 0.00 0.00 71649.89 13702.98 61008.06 00:19:10.579 0 00:19:10.579 14:33:15 -- target/queue_depth.sh@39 -- # killprocess 74967 00:19:10.579 14:33:15 -- common/autotest_common.sh@936 -- # '[' -z 74967 ']' 00:19:10.579 14:33:15 -- common/autotest_common.sh@940 -- # kill -0 74967 00:19:10.579 14:33:15 -- common/autotest_common.sh@941 -- # uname 00:19:10.579 14:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:10.579 14:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74967 00:19:10.579 killing process with pid 74967 00:19:10.579 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.579 00:19:10.579 Latency(us) 00:19:10.579 [2024-12-06T14:33:17.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.579 [2024-12-06T14:33:17.549Z] =================================================================================================================== 00:19:10.579 [2024-12-06T14:33:17.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.579 14:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:10.579 14:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:10.579 14:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74967' 00:19:10.579 14:33:15 -- common/autotest_common.sh@955 -- # kill 74967 00:19:10.579 14:33:15 -- common/autotest_common.sh@960 -- # wait 74967 00:19:10.579 14:33:15 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:10.579 14:33:15 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:10.579 14:33:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:10.579 14:33:15 -- nvmf/common.sh@116 -- # sync 00:19:10.579 14:33:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:10.579 14:33:15 -- nvmf/common.sh@119 -- # set +e 00:19:10.579 14:33:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:10.580 14:33:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:10.580 rmmod nvme_tcp 00:19:10.580 rmmod nvme_fabrics 00:19:10.580 rmmod nvme_keyring 00:19:10.580 14:33:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:10.580 14:33:15 -- nvmf/common.sh@123 -- # set -e 00:19:10.580 14:33:15 -- nvmf/common.sh@124 -- # return 0 00:19:10.580 14:33:15 -- nvmf/common.sh@477 -- # '[' -n 74917 ']' 00:19:10.580 14:33:15 -- nvmf/common.sh@478 -- # killprocess 74917 00:19:10.580 14:33:15 -- common/autotest_common.sh@936 -- # '[' -z 74917 ']' 00:19:10.580 14:33:15 -- common/autotest_common.sh@940 -- # kill -0 74917 00:19:10.580 14:33:15 -- common/autotest_common.sh@941 -- # uname 00:19:10.580 14:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:10.580 14:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74917 00:19:10.580 killing process with pid 74917 00:19:10.580 14:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:10.580 14:33:15 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:10.580 14:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74917' 00:19:10.580 14:33:15 -- common/autotest_common.sh@955 -- # kill 74917 00:19:10.580 14:33:15 -- common/autotest_common.sh@960 -- # wait 74917 00:19:10.580 14:33:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:10.580 14:33:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:10.580 14:33:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:10.580 14:33:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.580 14:33:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:10.580 14:33:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.580 14:33:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.580 14:33:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.580 14:33:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:10.580 00:19:10.580 real 0m13.832s 00:19:10.580 user 0m23.435s 00:19:10.580 sys 0m2.329s 00:19:10.580 ************************************ 00:19:10.580 END TEST nvmf_queue_depth 00:19:10.580 ************************************ 00:19:10.580 14:33:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:10.580 14:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 14:33:16 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:10.580 14:33:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.580 14:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.580 14:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 ************************************ 00:19:10.580 START TEST nvmf_multipath 00:19:10.580 ************************************ 00:19:10.580 14:33:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:10.580 * Looking for test storage... 00:19:10.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:10.580 14:33:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:10.580 14:33:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:10.580 14:33:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:10.580 14:33:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:10.580 14:33:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:10.580 14:33:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:10.580 14:33:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:10.580 14:33:16 -- scripts/common.sh@335 -- # IFS=.-: 00:19:10.580 14:33:16 -- scripts/common.sh@335 -- # read -ra ver1 00:19:10.580 14:33:16 -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.580 14:33:16 -- scripts/common.sh@336 -- # read -ra ver2 00:19:10.580 14:33:16 -- scripts/common.sh@337 -- # local 'op=<' 00:19:10.580 14:33:16 -- scripts/common.sh@339 -- # ver1_l=2 00:19:10.580 14:33:16 -- scripts/common.sh@340 -- # ver2_l=1 00:19:10.580 14:33:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:10.580 14:33:16 -- scripts/common.sh@343 -- # case "$op" in 00:19:10.580 14:33:16 -- scripts/common.sh@344 -- # : 1 00:19:10.580 14:33:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:10.580 14:33:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.580 14:33:16 -- scripts/common.sh@364 -- # decimal 1 00:19:10.580 14:33:16 -- scripts/common.sh@352 -- # local d=1 00:19:10.580 14:33:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.580 14:33:16 -- scripts/common.sh@354 -- # echo 1 00:19:10.580 14:33:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:10.580 14:33:16 -- scripts/common.sh@365 -- # decimal 2 00:19:10.580 14:33:16 -- scripts/common.sh@352 -- # local d=2 00:19:10.580 14:33:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.580 14:33:16 -- scripts/common.sh@354 -- # echo 2 00:19:10.580 14:33:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:10.580 14:33:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:10.580 14:33:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:10.580 14:33:16 -- scripts/common.sh@367 -- # return 0 00:19:10.580 14:33:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.580 14:33:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.580 --rc genhtml_branch_coverage=1 00:19:10.580 --rc genhtml_function_coverage=1 00:19:10.580 --rc genhtml_legend=1 00:19:10.580 --rc geninfo_all_blocks=1 00:19:10.580 --rc geninfo_unexecuted_blocks=1 00:19:10.580 00:19:10.580 ' 00:19:10.580 14:33:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.580 --rc genhtml_branch_coverage=1 00:19:10.580 --rc genhtml_function_coverage=1 00:19:10.580 --rc genhtml_legend=1 00:19:10.580 --rc geninfo_all_blocks=1 00:19:10.580 --rc geninfo_unexecuted_blocks=1 00:19:10.580 00:19:10.580 ' 00:19:10.580 14:33:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.580 --rc genhtml_branch_coverage=1 00:19:10.580 --rc genhtml_function_coverage=1 00:19:10.580 --rc genhtml_legend=1 00:19:10.580 --rc geninfo_all_blocks=1 00:19:10.580 --rc geninfo_unexecuted_blocks=1 00:19:10.580 00:19:10.580 ' 00:19:10.580 14:33:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.580 --rc genhtml_branch_coverage=1 00:19:10.580 --rc genhtml_function_coverage=1 00:19:10.580 --rc genhtml_legend=1 00:19:10.580 --rc geninfo_all_blocks=1 00:19:10.580 --rc geninfo_unexecuted_blocks=1 00:19:10.580 00:19:10.580 ' 00:19:10.580 14:33:16 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.580 14:33:16 -- nvmf/common.sh@7 -- # uname -s 00:19:10.580 14:33:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.580 14:33:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.580 14:33:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.580 14:33:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.580 14:33:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.580 14:33:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.580 14:33:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.580 14:33:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.580 14:33:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.580 14:33:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.580 14:33:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:19:10.580 
14:33:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:19:10.580 14:33:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.580 14:33:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.580 14:33:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.580 14:33:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.580 14:33:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.580 14:33:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.580 14:33:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.580 14:33:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.580 14:33:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.580 14:33:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.580 14:33:16 -- paths/export.sh@5 -- # export PATH 00:19:10.580 14:33:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.580 14:33:16 -- nvmf/common.sh@46 -- # : 0 00:19:10.580 14:33:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:10.580 14:33:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:10.580 14:33:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:10.580 14:33:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.581 14:33:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.581 14:33:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:10.581 14:33:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:10.581 14:33:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:10.581 14:33:16 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.581 14:33:16 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.581 14:33:16 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:10.581 14:33:16 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.581 14:33:16 -- target/multipath.sh@43 -- # nvmftestinit 00:19:10.581 14:33:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:10.581 14:33:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.581 14:33:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:10.581 14:33:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:10.581 14:33:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:10.581 14:33:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.581 14:33:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.581 14:33:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.581 14:33:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:10.581 14:33:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:10.581 14:33:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:10.581 14:33:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:10.581 14:33:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:10.581 14:33:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:10.581 14:33:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.581 14:33:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.581 14:33:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:10.581 14:33:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:10.581 14:33:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.581 14:33:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.581 14:33:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.581 14:33:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.581 14:33:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.581 14:33:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.581 14:33:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.581 14:33:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.581 14:33:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:10.581 14:33:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:10.581 Cannot find device "nvmf_tgt_br" 00:19:10.581 14:33:16 -- nvmf/common.sh@154 -- # true 00:19:10.581 14:33:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.581 Cannot find device "nvmf_tgt_br2" 00:19:10.581 14:33:16 -- nvmf/common.sh@155 -- # true 00:19:10.581 14:33:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:10.581 14:33:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:10.581 Cannot find device "nvmf_tgt_br" 00:19:10.581 14:33:16 -- nvmf/common.sh@157 -- # true 00:19:10.581 14:33:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:10.581 Cannot find device "nvmf_tgt_br2" 00:19:10.581 14:33:16 -- nvmf/common.sh@158 -- # true 00:19:10.581 14:33:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:10.581 14:33:16 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:10.581 14:33:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.581 14:33:16 -- nvmf/common.sh@161 -- # true 00:19:10.581 14:33:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.581 14:33:16 -- nvmf/common.sh@162 -- # true 00:19:10.581 14:33:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.581 14:33:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.581 14:33:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.581 14:33:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.581 14:33:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.581 14:33:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.581 14:33:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.581 14:33:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:10.581 14:33:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:10.581 14:33:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:10.581 14:33:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:10.581 14:33:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:10.581 14:33:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:10.581 14:33:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.581 14:33:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.581 14:33:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.581 14:33:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:10.581 14:33:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:10.581 14:33:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.581 14:33:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.581 14:33:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.581 14:33:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.581 14:33:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.581 14:33:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:10.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:10.581 00:19:10.581 --- 10.0.0.2 ping statistics --- 00:19:10.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.581 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:10.581 14:33:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:10.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:10.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:10.581 00:19:10.581 --- 10.0.0.3 ping statistics --- 00:19:10.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.581 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:10.581 14:33:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:10.581 00:19:10.581 --- 10.0.0.1 ping statistics --- 00:19:10.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.581 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:10.581 14:33:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.581 14:33:16 -- nvmf/common.sh@421 -- # return 0 00:19:10.581 14:33:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:10.581 14:33:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.581 14:33:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:10.581 14:33:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:10.581 14:33:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.581 14:33:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:10.581 14:33:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:10.581 14:33:16 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:19:10.581 14:33:16 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:19:10.581 14:33:16 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:19:10.581 14:33:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:10.581 14:33:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:10.581 14:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.581 14:33:16 -- nvmf/common.sh@469 -- # nvmfpid=75307 00:19:10.581 14:33:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:10.581 14:33:16 -- nvmf/common.sh@470 -- # waitforlisten 75307 00:19:10.581 14:33:16 -- common/autotest_common.sh@829 -- # '[' -z 75307 ']' 00:19:10.581 14:33:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.581 14:33:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.581 14:33:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.581 14:33:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.581 14:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.581 [2024-12-06 14:33:16.804745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:10.581 [2024-12-06 14:33:16.805074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.581 [2024-12-06 14:33:16.947577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:10.581 [2024-12-06 14:33:17.077013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:10.581 [2024-12-06 14:33:17.077592] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:10.581 [2024-12-06 14:33:17.077782] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.581 [2024-12-06 14:33:17.078008] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.581 [2024-12-06 14:33:17.078324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.581 [2024-12-06 14:33:17.078441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.581 [2024-12-06 14:33:17.078553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.581 [2024-12-06 14:33:17.078558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.839 14:33:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.839 14:33:17 -- common/autotest_common.sh@862 -- # return 0 00:19:10.840 14:33:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:10.840 14:33:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:10.840 14:33:17 -- common/autotest_common.sh@10 -- # set +x 00:19:10.840 14:33:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.840 14:33:17 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:11.099 [2024-12-06 14:33:18.006514] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.099 14:33:18 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:11.357 Malloc0 00:19:11.615 14:33:18 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:19:11.873 14:33:18 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.132 14:33:18 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.132 [2024-12-06 14:33:19.098735] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.390 14:33:19 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:12.390 [2024-12-06 14:33:19.343015] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:12.649 14:33:19 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:19:12.649 14:33:19 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:19:12.907 14:33:19 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:19:12.907 14:33:19 -- common/autotest_common.sh@1187 -- # local i=0 00:19:12.907 14:33:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.907 14:33:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:12.907 14:33:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:15.455 14:33:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
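Editor's note: the two `nvme connect` calls above give the kernel two paths (nvme0c0n1 and nvme0c1n1) to the same namespace; later in this log the test flips per-listener ANA states on the target and polls the paths' sysfs state. A simplified sketch of that step, using the same RPC and sysfs paths that appear below:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # mark the 10.0.0.2 listener inaccessible and the 10.0.0.3 listener non-optimized
  $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  # the test then polls until the kernel reflects the new state on each path
  cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
  cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized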
00:19:15.455 14:33:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:15.455 14:33:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:15.455 14:33:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:15.455 14:33:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.455 14:33:21 -- common/autotest_common.sh@1197 -- # return 0 00:19:15.455 14:33:21 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:19:15.455 14:33:21 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:19:15.455 14:33:21 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:19:15.455 14:33:21 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:19:15.455 14:33:21 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:19:15.455 14:33:21 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:19:15.455 14:33:21 -- target/multipath.sh@38 -- # return 0 00:19:15.455 14:33:21 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:19:15.455 14:33:21 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:19:15.456 14:33:21 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:19:15.456 14:33:21 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:19:15.456 14:33:21 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:19:15.456 14:33:21 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:19:15.456 14:33:21 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:19:15.456 14:33:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:19:15.456 14:33:21 -- target/multipath.sh@22 -- # local timeout=20 00:19:15.456 14:33:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:15.456 14:33:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:15.456 14:33:21 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:15.456 14:33:21 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:19:15.456 14:33:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:19:15.456 14:33:21 -- target/multipath.sh@22 -- # local timeout=20 00:19:15.456 14:33:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:15.456 14:33:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:15.456 14:33:21 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:15.456 14:33:21 -- target/multipath.sh@85 -- # echo numa 00:19:15.456 14:33:21 -- target/multipath.sh@88 -- # fio_pid=75447 00:19:15.456 14:33:21 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:19:15.456 14:33:21 -- target/multipath.sh@90 -- # sleep 1 00:19:15.456 [global] 00:19:15.456 thread=1 00:19:15.456 invalidate=1 00:19:15.456 rw=randrw 00:19:15.456 time_based=1 00:19:15.456 runtime=6 00:19:15.456 ioengine=libaio 00:19:15.456 direct=1 00:19:15.456 bs=4096 00:19:15.456 iodepth=128 00:19:15.456 norandommap=0 00:19:15.456 numjobs=1 00:19:15.456 00:19:15.456 verify_dump=1 00:19:15.456 verify_backlog=512 00:19:15.456 verify_state_save=0 00:19:15.456 do_verify=1 00:19:15.456 verify=crc32c-intel 00:19:15.456 [job0] 00:19:15.456 filename=/dev/nvme0n1 00:19:15.456 Could not set queue depth (nvme0n1) 00:19:15.456 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.456 fio-3.35 00:19:15.456 Starting 1 thread 00:19:16.022 14:33:22 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:16.279 14:33:23 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:16.538 14:33:23 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:19:16.538 14:33:23 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:19:16.538 14:33:23 -- target/multipath.sh@22 -- # local timeout=20 00:19:16.538 14:33:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:16.538 14:33:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:16.538 14:33:23 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:16.538 14:33:23 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:19:16.538 14:33:23 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:19:16.538 14:33:23 -- target/multipath.sh@22 -- # local timeout=20 00:19:16.538 14:33:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:16.538 14:33:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:16.538 14:33:23 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:16.538 14:33:23 -- target/multipath.sh@25 -- # sleep 1s 00:19:17.472 14:33:24 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:17.472 14:33:24 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:17.472 14:33:24 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:17.472 14:33:24 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:18.039 14:33:24 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:18.039 14:33:24 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:19:18.039 14:33:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:19:18.039 14:33:24 -- target/multipath.sh@22 -- # local timeout=20 00:19:18.039 14:33:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:18.039 14:33:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:18.039 14:33:24 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:18.039 14:33:24 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:19:18.039 14:33:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:19:18.039 14:33:24 -- target/multipath.sh@22 -- # local timeout=20 00:19:18.039 14:33:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:18.039 14:33:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:18.039 14:33:24 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:18.039 14:33:24 -- target/multipath.sh@25 -- # sleep 1s 00:19:19.455 14:33:25 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:19.455 14:33:25 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:19.455 14:33:25 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:19.455 14:33:25 -- target/multipath.sh@104 -- # wait 75447 00:19:21.356 00:19:21.356 job0: (groupid=0, jobs=1): err= 0: pid=75478: Fri Dec 6 14:33:28 2024 00:19:21.356 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(257MiB/6006msec) 00:19:21.356 slat (usec): min=4, max=5909, avg=52.43, stdev=239.94 00:19:21.356 clat (usec): min=2049, max=15179, avg=8021.49, stdev=1280.51 00:19:21.356 lat (usec): min=2060, max=15187, avg=8073.92, stdev=1289.40 00:19:21.356 clat percentiles (usec): 00:19:21.356 | 1.00th=[ 4752], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7111], 00:19:21.356 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:19:21.356 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10159], 00:19:21.356 | 99.00th=[11863], 99.50th=[12387], 99.90th=[13304], 99.95th=[13698], 00:19:21.356 | 99.99th=[14746] 00:19:21.356 bw ( KiB/s): min= 9680, max=27848, per=51.31%, avg=22502.73, stdev=5445.55, samples=11 00:19:21.356 iops : min= 2420, max= 6962, avg=5625.64, stdev=1361.42, samples=11 00:19:21.356 write: IOPS=6299, BW=24.6MiB/s (25.8MB/s)(132MiB/5374msec); 0 zone resets 00:19:21.356 slat (usec): min=11, max=3694, avg=65.12, stdev=168.86 00:19:21.356 clat (usec): min=1682, max=14423, avg=6904.93, stdev=1105.11 00:19:21.356 lat (usec): min=2470, max=14448, avg=6970.05, stdev=1109.25 00:19:21.356 clat percentiles (usec): 00:19:21.356 | 1.00th=[ 3556], 5.00th=[ 4883], 10.00th=[ 5800], 20.00th=[ 6259], 00:19:21.356 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:19:21.356 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8356], 00:19:21.356 | 99.00th=[10290], 99.50th=[11076], 99.90th=[13173], 99.95th=[13304], 00:19:21.356 | 99.99th=[13960] 00:19:21.356 bw ( KiB/s): min= 9920, max=27496, per=89.34%, avg=22514.00, stdev=5192.44, samples=11 00:19:21.356 iops : min= 2480, max= 6874, avg=5628.45, stdev=1298.09, samples=11 00:19:21.356 lat (msec) : 2=0.01%, 4=0.92%, 10=94.82%, 20=4.26% 00:19:21.356 cpu : usr=5.00%, sys=22.20%, ctx=5876, majf=0, minf=114 00:19:21.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:21.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.356 issued rwts: total=65842,33855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.356 00:19:21.356 Run status group 0 (all jobs): 00:19:21.356 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=257MiB (270MB), run=6006-6006msec 00:19:21.356 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=132MiB (139MB), run=5374-5374msec 00:19:21.356 00:19:21.356 Disk stats (read/write): 00:19:21.356 nvme0n1: ios=64870/33180, merge=0/0, ticks=487749/214377, in_queue=702126, util=98.75% 00:19:21.356 14:33:28 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:21.614 14:33:28 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:21.872 14:33:28 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:19:21.872 
14:33:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:19:21.872 14:33:28 -- target/multipath.sh@22 -- # local timeout=20 00:19:21.872 14:33:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:21.872 14:33:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:21.872 14:33:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:21.872 14:33:28 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:19:21.872 14:33:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:19:21.872 14:33:28 -- target/multipath.sh@22 -- # local timeout=20 00:19:21.872 14:33:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:21.872 14:33:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:21.872 14:33:28 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:19:21.872 14:33:28 -- target/multipath.sh@25 -- # sleep 1s 00:19:23.249 14:33:29 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:23.249 14:33:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:23.249 14:33:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:23.249 14:33:29 -- target/multipath.sh@113 -- # echo round-robin 00:19:23.249 14:33:29 -- target/multipath.sh@116 -- # fio_pid=75601 00:19:23.249 14:33:29 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:19:23.249 14:33:29 -- target/multipath.sh@118 -- # sleep 1 00:19:23.249 [global] 00:19:23.249 thread=1 00:19:23.249 invalidate=1 00:19:23.249 rw=randrw 00:19:23.249 time_based=1 00:19:23.249 runtime=6 00:19:23.249 ioengine=libaio 00:19:23.249 direct=1 00:19:23.249 bs=4096 00:19:23.249 iodepth=128 00:19:23.249 norandommap=0 00:19:23.249 numjobs=1 00:19:23.249 00:19:23.249 verify_dump=1 00:19:23.249 verify_backlog=512 00:19:23.249 verify_state_save=0 00:19:23.249 do_verify=1 00:19:23.249 verify=crc32c-intel 00:19:23.249 [job0] 00:19:23.249 filename=/dev/nvme0n1 00:19:23.249 Could not set queue depth (nvme0n1) 00:19:23.249 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.249 fio-3.35 00:19:23.249 Starting 1 thread 00:19:24.183 14:33:30 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:24.183 14:33:31 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:24.442 14:33:31 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:19:24.442 14:33:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:19:24.442 14:33:31 -- target/multipath.sh@22 -- # local timeout=20 00:19:24.442 14:33:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:24.442 14:33:31 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:19:24.442 14:33:31 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:24.442 14:33:31 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:19:24.442 14:33:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:19:24.442 14:33:31 -- target/multipath.sh@22 -- # local timeout=20 00:19:24.442 14:33:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:24.442 14:33:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:24.442 14:33:31 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:24.442 14:33:31 -- target/multipath.sh@25 -- # sleep 1s 00:19:25.837 14:33:32 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:25.837 14:33:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:25.837 14:33:32 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:25.837 14:33:32 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:25.837 14:33:32 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:26.094 14:33:32 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:19:26.094 14:33:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:19:26.094 14:33:32 -- target/multipath.sh@22 -- # local timeout=20 00:19:26.094 14:33:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:26.094 14:33:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:26.094 14:33:32 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:26.094 14:33:32 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:19:26.094 14:33:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:19:26.094 14:33:32 -- target/multipath.sh@22 -- # local timeout=20 00:19:26.094 14:33:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:26.094 14:33:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:26.094 14:33:32 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:26.095 14:33:32 -- target/multipath.sh@25 -- # sleep 1s 00:19:27.028 14:33:33 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:27.028 14:33:33 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:27.028 14:33:33 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:27.028 14:33:33 -- target/multipath.sh@132 -- # wait 75601 00:19:29.558 00:19:29.558 job0: (groupid=0, jobs=1): err= 0: pid=75622: Fri Dec 6 14:33:36 2024 00:19:29.558 read: IOPS=11.2k, BW=43.7MiB/s (45.9MB/s)(263MiB/6006msec) 00:19:29.558 slat (usec): min=3, max=5411, avg=44.22, stdev=214.42 00:19:29.558 clat (usec): min=973, max=18062, avg=7843.06, stdev=1510.54 00:19:29.558 lat (usec): min=984, max=18072, avg=7887.28, stdev=1519.48 00:19:29.558 clat percentiles (usec): 00:19:29.558 | 1.00th=[ 3654], 5.00th=[ 5276], 10.00th=[ 6194], 20.00th=[ 6915], 00:19:29.558 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8160], 00:19:29.558 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10159], 00:19:29.558 | 99.00th=[12125], 99.50th=[12518], 99.90th=[14091], 99.95th=[15139], 00:19:29.558 | 99.99th=[17695] 00:19:29.558 bw ( KiB/s): min= 6976, max=35736, per=52.64%, avg=23581.09, stdev=8128.93, samples=11 00:19:29.558 iops : min= 1744, max= 8934, avg=5895.45, stdev=2032.32, samples=11 00:19:29.558 write: IOPS=6745, BW=26.3MiB/s (27.6MB/s)(140MiB/5303msec); 0 zone resets 00:19:29.558 slat (usec): min=11, max=2122, avg=56.10, stdev=143.10 00:19:29.558 clat (usec): min=854, max=14908, avg=6569.94, stdev=1365.51 00:19:29.558 lat (usec): min=898, max=14932, avg=6626.03, stdev=1374.61 00:19:29.558 clat percentiles (usec): 00:19:29.558 | 1.00th=[ 3097], 5.00th=[ 3916], 10.00th=[ 4490], 20.00th=[ 5538], 00:19:29.558 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 7046], 00:19:29.558 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:19:29.558 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12125], 99.95th=[12518], 00:19:29.558 | 99.99th=[13173] 00:19:29.558 bw ( KiB/s): min= 7536, max=35232, per=87.54%, avg=23619.64, stdev=7713.79, samples=11 00:19:29.558 iops : min= 1884, max= 8808, avg=5904.91, stdev=1928.45, samples=11 00:19:29.558 lat (usec) : 1000=0.01% 00:19:29.558 lat (msec) : 2=0.07%, 4=2.88%, 10=92.82%, 20=4.22% 00:19:29.558 cpu : usr=5.43%, sys=22.93%, ctx=6405, majf=0, minf=114 00:19:29.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:29.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.558 issued rwts: total=67265,35769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.558 00:19:29.558 Run status group 0 (all jobs): 00:19:29.558 READ: bw=43.7MiB/s (45.9MB/s), 43.7MiB/s-43.7MiB/s (45.9MB/s-45.9MB/s), io=263MiB (276MB), run=6006-6006msec 00:19:29.558 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=140MiB (147MB), run=5303-5303msec 00:19:29.558 00:19:29.558 Disk stats (read/write): 00:19:29.558 nvme0n1: ios=66612/34883, merge=0/0, ticks=488331/213164, in_queue=701495, util=98.68% 00:19:29.558 14:33:36 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:29.558 14:33:36 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:29.558 14:33:36 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.558 14:33:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.558 14:33:36 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:29.558 14:33:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.558 14:33:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:29.558 14:33:36 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.558 14:33:36 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.817 14:33:36 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:19:29.817 14:33:36 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:19:29.817 14:33:36 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:19:29.817 14:33:36 -- target/multipath.sh@144 -- # nvmftestfini 00:19:29.817 14:33:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:29.817 14:33:36 -- nvmf/common.sh@116 -- # sync 00:19:29.817 14:33:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:29.817 14:33:36 -- nvmf/common.sh@119 -- # set +e 00:19:29.817 14:33:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:29.817 14:33:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:29.817 rmmod nvme_tcp 00:19:29.817 rmmod nvme_fabrics 00:19:29.817 rmmod nvme_keyring 00:19:29.817 14:33:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:29.817 14:33:36 -- nvmf/common.sh@123 -- # set -e 00:19:29.817 14:33:36 -- nvmf/common.sh@124 -- # return 0 00:19:29.817 14:33:36 -- nvmf/common.sh@477 -- # '[' -n 75307 ']' 00:19:29.817 14:33:36 -- nvmf/common.sh@478 -- # killprocess 75307 00:19:29.817 14:33:36 -- common/autotest_common.sh@936 -- # '[' -z 75307 ']' 00:19:29.817 14:33:36 -- common/autotest_common.sh@940 -- # kill -0 75307 00:19:29.817 14:33:36 -- common/autotest_common.sh@941 -- # uname 00:19:29.817 14:33:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:29.817 14:33:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75307 00:19:30.075 killing process with pid 75307 00:19:30.075 14:33:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:30.075 14:33:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:30.075 14:33:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75307' 00:19:30.075 14:33:36 -- common/autotest_common.sh@955 -- # kill 75307 00:19:30.075 14:33:36 -- common/autotest_common.sh@960 -- # wait 75307 00:19:30.335 14:33:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:30.335 14:33:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:30.335 14:33:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:30.335 14:33:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.335 14:33:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:30.335 14:33:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.335 14:33:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.335 14:33:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.335 14:33:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:30.335 ************************************ 00:19:30.335 END TEST nvmf_multipath 00:19:30.335 ************************************ 00:19:30.335 00:19:30.335 real 0m21.080s 00:19:30.335 user 1m22.011s 00:19:30.335 sys 0m6.480s 00:19:30.335 14:33:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:30.335 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:30.594 14:33:37 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:30.594 14:33:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.594 14:33:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.594 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:30.594 ************************************ 00:19:30.594 START TEST nvmf_zcopy 00:19:30.594 ************************************ 00:19:30.594 14:33:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:30.594 * Looking for test storage... 00:19:30.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.594 14:33:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:30.594 14:33:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:30.594 14:33:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:30.594 14:33:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:30.594 14:33:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:30.594 14:33:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:30.594 14:33:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:30.594 14:33:37 -- scripts/common.sh@335 -- # IFS=.-: 00:19:30.594 14:33:37 -- scripts/common.sh@335 -- # read -ra ver1 00:19:30.594 14:33:37 -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.594 14:33:37 -- scripts/common.sh@336 -- # read -ra ver2 00:19:30.594 14:33:37 -- scripts/common.sh@337 -- # local 'op=<' 00:19:30.594 14:33:37 -- scripts/common.sh@339 -- # ver1_l=2 00:19:30.594 14:33:37 -- scripts/common.sh@340 -- # ver2_l=1 00:19:30.594 14:33:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:30.594 14:33:37 -- scripts/common.sh@343 -- # case "$op" in 00:19:30.594 14:33:37 -- scripts/common.sh@344 -- # : 1 00:19:30.594 14:33:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:30.594 14:33:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.594 14:33:37 -- scripts/common.sh@364 -- # decimal 1 00:19:30.594 14:33:37 -- scripts/common.sh@352 -- # local d=1 00:19:30.594 14:33:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.594 14:33:37 -- scripts/common.sh@354 -- # echo 1 00:19:30.594 14:33:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:30.594 14:33:37 -- scripts/common.sh@365 -- # decimal 2 00:19:30.594 14:33:37 -- scripts/common.sh@352 -- # local d=2 00:19:30.594 14:33:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.594 14:33:37 -- scripts/common.sh@354 -- # echo 2 00:19:30.594 14:33:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:30.594 14:33:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:30.594 14:33:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:30.594 14:33:37 -- scripts/common.sh@367 -- # return 0 00:19:30.594 14:33:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.594 14:33:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:30.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.594 --rc genhtml_branch_coverage=1 00:19:30.594 --rc genhtml_function_coverage=1 00:19:30.594 --rc genhtml_legend=1 00:19:30.594 --rc geninfo_all_blocks=1 00:19:30.594 --rc geninfo_unexecuted_blocks=1 00:19:30.594 00:19:30.594 ' 00:19:30.594 14:33:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:30.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.594 --rc genhtml_branch_coverage=1 00:19:30.594 --rc genhtml_function_coverage=1 00:19:30.594 --rc genhtml_legend=1 00:19:30.594 --rc geninfo_all_blocks=1 00:19:30.594 --rc geninfo_unexecuted_blocks=1 00:19:30.594 00:19:30.594 ' 00:19:30.594 14:33:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:30.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.594 --rc genhtml_branch_coverage=1 00:19:30.594 --rc genhtml_function_coverage=1 00:19:30.594 --rc genhtml_legend=1 00:19:30.594 --rc geninfo_all_blocks=1 00:19:30.594 --rc geninfo_unexecuted_blocks=1 00:19:30.594 00:19:30.594 ' 00:19:30.594 14:33:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:30.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.594 --rc genhtml_branch_coverage=1 00:19:30.594 --rc genhtml_function_coverage=1 00:19:30.594 --rc genhtml_legend=1 00:19:30.594 --rc geninfo_all_blocks=1 00:19:30.594 --rc geninfo_unexecuted_blocks=1 00:19:30.594 00:19:30.594 ' 00:19:30.594 14:33:37 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.594 14:33:37 -- nvmf/common.sh@7 -- # uname -s 00:19:30.594 14:33:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.594 14:33:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.594 14:33:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.594 14:33:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.594 14:33:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.594 14:33:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.594 14:33:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.594 14:33:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.594 14:33:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.594 14:33:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.594 14:33:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:19:30.594 
14:33:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:19:30.594 14:33:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.594 14:33:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.594 14:33:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.594 14:33:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.594 14:33:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.594 14:33:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.594 14:33:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.594 14:33:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.595 14:33:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.595 14:33:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.595 14:33:37 -- paths/export.sh@5 -- # export PATH 00:19:30.595 14:33:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.595 14:33:37 -- nvmf/common.sh@46 -- # : 0 00:19:30.595 14:33:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.595 14:33:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.595 14:33:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.595 14:33:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.595 14:33:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.595 14:33:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
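The scripts/common.sh trace above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) decides which lcov coverage flags to export by splitting each version string on '.', '-' and ':' and comparing it component by component. A minimal standalone sketch of that comparison follows; the helper name version_lt is illustrative (not the repo's actual function) and version components are assumed to be plain decimal numbers.

#!/usr/bin/env bash
# Sketch of the component-wise version compare traced above
# (cmp_versions 1.15 '<' 2). version_lt is an illustrative name,
# and version components are assumed to be decimal numbers.
version_lt() {                       # returns 0 (true) if $1 < $2
    local IFS=.-:                    # same separators as the trace
    local -a ver1=($1) ver2=($2)
    local i len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing parts count as 0
        (( 10#$a < 10#$b )) && return 0
        (( 10#$a > 10#$b )) && return 1
    done
    return 1                         # equal is not "less than"
}

# Usage mirroring the lcov check above:
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "lcov is older than 2.x: enable the extra branch/function coverage rc options"
fi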
00:19:30.595 14:33:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.595 14:33:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.595 14:33:37 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:30.595 14:33:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.595 14:33:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.595 14:33:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.595 14:33:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.595 14:33:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.595 14:33:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.595 14:33:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.595 14:33:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.854 14:33:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:30.854 14:33:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:30.854 14:33:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:30.854 14:33:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:30.854 14:33:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:30.854 14:33:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:30.854 14:33:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.854 14:33:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.854 14:33:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.854 14:33:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:30.854 14:33:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.854 14:33:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.854 14:33:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.854 14:33:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.854 14:33:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.854 14:33:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.854 14:33:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.854 14:33:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.854 14:33:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:30.854 14:33:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:30.854 Cannot find device "nvmf_tgt_br" 00:19:30.854 14:33:37 -- nvmf/common.sh@154 -- # true 00:19:30.854 14:33:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.854 Cannot find device "nvmf_tgt_br2" 00:19:30.854 14:33:37 -- nvmf/common.sh@155 -- # true 00:19:30.854 14:33:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:30.854 14:33:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:30.854 Cannot find device "nvmf_tgt_br" 00:19:30.854 14:33:37 -- nvmf/common.sh@157 -- # true 00:19:30.854 14:33:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:30.854 Cannot find device "nvmf_tgt_br2" 00:19:30.854 14:33:37 -- nvmf/common.sh@158 -- # true 00:19:30.854 14:33:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:30.854 14:33:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:30.854 14:33:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.854 14:33:37 -- nvmf/common.sh@161 -- # true 00:19:30.854 14:33:37 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.854 14:33:37 -- nvmf/common.sh@162 -- # true 00:19:30.854 14:33:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.854 14:33:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.854 14:33:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.854 14:33:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.854 14:33:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.854 14:33:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.854 14:33:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.854 14:33:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.854 14:33:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.854 14:33:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:30.854 14:33:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:31.114 14:33:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:31.114 14:33:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:31.114 14:33:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.114 14:33:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.114 14:33:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.114 14:33:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:31.114 14:33:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:31.114 14:33:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.114 14:33:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.114 14:33:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.114 14:33:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.114 14:33:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.114 14:33:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:31.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:19:31.114 00:19:31.114 --- 10.0.0.2 ping statistics --- 00:19:31.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.114 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:31.114 14:33:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:31.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:31.114 00:19:31.114 --- 10.0.0.3 ping statistics --- 00:19:31.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.114 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:31.114 14:33:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:31.114 00:19:31.114 --- 10.0.0.1 ping statistics --- 00:19:31.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.114 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:31.114 14:33:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.114 14:33:37 -- nvmf/common.sh@421 -- # return 0 00:19:31.114 14:33:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:31.114 14:33:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.114 14:33:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:31.114 14:33:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:31.114 14:33:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.114 14:33:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:31.114 14:33:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:31.114 14:33:37 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:31.114 14:33:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:31.114 14:33:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.114 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.114 14:33:37 -- nvmf/common.sh@469 -- # nvmfpid=75924 00:19:31.114 14:33:37 -- nvmf/common.sh@470 -- # waitforlisten 75924 00:19:31.114 14:33:37 -- common/autotest_common.sh@829 -- # '[' -z 75924 ']' 00:19:31.114 14:33:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.114 14:33:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:31.114 14:33:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.114 14:33:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.114 14:33:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.114 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.114 [2024-12-06 14:33:38.013798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:31.114 [2024-12-06 14:33:38.013952] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.373 [2024-12-06 14:33:38.158020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.373 [2024-12-06 14:33:38.323600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:31.373 [2024-12-06 14:33:38.323794] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.373 [2024-12-06 14:33:38.323807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.373 [2024-12-06 14:33:38.323816] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
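The nvmf_veth_init sequence traced above builds the virtual test fabric used when NET_TYPE=virt: a target network namespace, veth pairs, 10.0.0.1 on the initiator side, 10.0.0.2/10.0.0.3 inside the namespace, a bridge tying the host-side ends together, an iptables rule for port 4420, and ping checks. A condensed sketch of the same sequence (root and iproute2 assumed; interface and namespace names copied from the log):

#!/usr/bin/env bash
# Condensed sketch of the nvmf_veth_init steps traced above.
set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one for the initiator, two for the target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addresses: 10.0.0.1 for the initiator, 10.0.0.2/10.0.0.3 for the target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side ends together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# allow NVMe/TCP traffic on 4420 and verify reachability in both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1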
00:19:31.373 [2024-12-06 14:33:38.323851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.309 14:33:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.309 14:33:39 -- common/autotest_common.sh@862 -- # return 0 00:19:32.309 14:33:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:32.309 14:33:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.309 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 14:33:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.309 14:33:39 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:32.309 14:33:39 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:32.309 14:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.309 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 [2024-12-06 14:33:39.122205] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.309 14:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.309 14:33:39 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:32.309 14:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.309 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 14:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.309 14:33:39 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.309 14:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.309 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 [2024-12-06 14:33:39.138392] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.309 14:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.309 14:33:39 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:32.309 14:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.309 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 14:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.309 14:33:39 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:32.309 14:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.309 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 malloc0 00:19:32.309 14:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.309 14:33:39 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:32.309 14:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.309 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 14:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.309 14:33:39 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:32.309 14:33:39 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:32.309 14:33:39 -- nvmf/common.sh@520 -- # config=() 00:19:32.309 14:33:39 -- nvmf/common.sh@520 -- # local subsystem config 00:19:32.309 14:33:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:32.309 14:33:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:32.309 { 00:19:32.309 "params": { 00:19:32.309 "name": "Nvme$subsystem", 00:19:32.309 "trtype": "$TEST_TRANSPORT", 
00:19:32.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.309 "adrfam": "ipv4", 00:19:32.309 "trsvcid": "$NVMF_PORT", 00:19:32.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.309 "hdgst": ${hdgst:-false}, 00:19:32.309 "ddgst": ${ddgst:-false} 00:19:32.309 }, 00:19:32.309 "method": "bdev_nvme_attach_controller" 00:19:32.309 } 00:19:32.309 EOF 00:19:32.309 )") 00:19:32.309 14:33:39 -- nvmf/common.sh@542 -- # cat 00:19:32.309 14:33:39 -- nvmf/common.sh@544 -- # jq . 00:19:32.309 14:33:39 -- nvmf/common.sh@545 -- # IFS=, 00:19:32.309 14:33:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:32.309 "params": { 00:19:32.309 "name": "Nvme1", 00:19:32.309 "trtype": "tcp", 00:19:32.309 "traddr": "10.0.0.2", 00:19:32.309 "adrfam": "ipv4", 00:19:32.309 "trsvcid": "4420", 00:19:32.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.309 "hdgst": false, 00:19:32.309 "ddgst": false 00:19:32.309 }, 00:19:32.309 "method": "bdev_nvme_attach_controller" 00:19:32.309 }' 00:19:32.309 [2024-12-06 14:33:39.259664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:32.309 [2024-12-06 14:33:39.259816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75975 ] 00:19:32.568 [2024-12-06 14:33:39.402123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.827 [2024-12-06 14:33:39.558476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.827 Running I/O for 10 seconds... 00:19:45.026 00:19:45.026 Latency(us) 00:19:45.026 [2024-12-06T14:33:51.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.026 [2024-12-06T14:33:51.996Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:45.026 Verification LBA range: start 0x0 length 0x1000 00:19:45.026 Nvme1n1 : 10.01 8703.85 68.00 0.00 0.00 14669.61 1638.40 30146.56 00:19:45.026 [2024-12-06T14:33:51.996Z] =================================================================================================================== 00:19:45.026 [2024-12-06T14:33:51.996Z] Total : 8703.85 68.00 0.00 0.00 14669.61 1638.40 30146.56 00:19:45.026 14:33:50 -- target/zcopy.sh@39 -- # perfpid=76098 00:19:45.026 14:33:50 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:45.026 14:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:45.026 14:33:50 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:45.026 14:33:50 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:45.026 14:33:50 -- nvmf/common.sh@520 -- # config=() 00:19:45.026 14:33:50 -- nvmf/common.sh@520 -- # local subsystem config 00:19:45.026 14:33:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:45.026 14:33:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:45.026 { 00:19:45.026 "params": { 00:19:45.026 "name": "Nvme$subsystem", 00:19:45.026 "trtype": "$TEST_TRANSPORT", 00:19:45.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.026 "adrfam": "ipv4", 00:19:45.026 "trsvcid": "$NVMF_PORT", 00:19:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.026 "hdgst": ${hdgst:-false}, 00:19:45.026 "ddgst": ${ddgst:-false} 
00:19:45.026 }, 00:19:45.026 "method": "bdev_nvme_attach_controller" 00:19:45.026 } 00:19:45.026 EOF 00:19:45.026 )") 00:19:45.026 14:33:50 -- nvmf/common.sh@542 -- # cat 00:19:45.026 [2024-12-06 14:33:50.197840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.197899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 14:33:50 -- nvmf/common.sh@544 -- # jq . 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 14:33:50 -- nvmf/common.sh@545 -- # IFS=, 00:19:45.026 14:33:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:45.026 "params": { 00:19:45.026 "name": "Nvme1", 00:19:45.026 "trtype": "tcp", 00:19:45.026 "traddr": "10.0.0.2", 00:19:45.026 "adrfam": "ipv4", 00:19:45.026 "trsvcid": "4420", 00:19:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.026 "hdgst": false, 00:19:45.026 "ddgst": false 00:19:45.026 }, 00:19:45.026 "method": "bdev_nvme_attach_controller" 00:19:45.026 }' 00:19:45.026 [2024-12-06 14:33:50.209801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.209859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 [2024-12-06 14:33:50.221824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.221870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 [2024-12-06 14:33:50.233824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.233861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 [2024-12-06 14:33:50.245828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.245861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 [2024-12-06 14:33:50.252518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
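The bdevperf invocation above takes its whole bdev configuration as JSON on a file descriptor (--json /dev/fd/63), with gen_nvmf_target_json emitting one bdev_nvme_attach_controller stanza per target. A minimal sketch of that pattern follows; the address, port and NQNs are the ones printed in this run, while the JSON envelope and the helper name gen_target_json are illustrative rather than the exact gen_nvmf_target_json output.

#!/usr/bin/env bash
# Sketch of the "generate JSON config, hand it to bdevperf over /dev/fd" pattern
# used above. gen_target_json is an illustrative stand-in for gen_nvmf_target_json;
# traddr/trsvcid/NQNs match the values shown in this run.
gen_target_json() {
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Process substitution supplies the config as /dev/fd/NN, as in the run above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192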
00:19:45.026 [2024-12-06 14:33:50.252649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76098 ] 00:19:45.026 [2024-12-06 14:33:50.257785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.257826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 [2024-12-06 14:33:50.269794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.269841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 [2024-12-06 14:33:50.281796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.281830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.026 [2024-12-06 14:33:50.293822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.026 [2024-12-06 14:33:50.293870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.026 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.305862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.305897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.317848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.317882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.329816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.329864] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.341866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.341901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.354077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.354112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.366075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.366109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.378078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.378117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.390091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.390133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 [2024-12-06 14:33:50.392192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.402090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.402123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.414088] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.414131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.426089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.426131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.438090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.438130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.450114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.450146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.462125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.462172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.474103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.474154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.486093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.486128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 
14:33:50.498107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.498136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.510108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.510137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.522136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.522168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.534133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.534171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.546131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.546166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 [2024-12-06 14:33:50.549644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.558121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.558157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.570136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.570182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.582137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.582170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.027 [2024-12-06 14:33:50.594130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.027 [2024-12-06 14:33:50.594172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.027 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.606134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.606167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.618149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.618180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.630139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.630170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.642142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.642171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.654149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.654192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.666172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.666203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.678153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.678183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.690157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.690187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.702161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.702191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.714238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.714280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.726180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.726222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.738188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.738233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.750191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.750227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.762191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.762225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.774204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.774253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 Running I/O for 5 seconds... 00:19:45.028 [2024-12-06 14:33:50.786201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.786235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.803698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.803747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.822269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.822320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.836134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.836185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.851332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.851382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.861774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.861845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.877089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.877169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.892149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.892188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.902985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.903039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.918074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.028 [2024-12-06 14:33:50.918112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.028 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.028 [2024-12-06 14:33:50.933273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:50.933321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:19:45.029 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:50.948059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:50.948098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:50.962932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:50.962970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:50.979196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:50.979246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:50.994738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:50.994776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.010410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.010471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.029350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.029452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.044309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.044348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.060685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.060738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.076941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.076979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.094070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.094132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.110681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.110741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.125948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.125987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.141606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.141648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.160891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.160939] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.177683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.177740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.194207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.194247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.209934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.209972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.225679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.225745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.241272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.241324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.259772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.259810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.274271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 
14:33:51.274311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.291542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.291579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.308707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.308746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.029 [2024-12-06 14:33:51.323642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.029 [2024-12-06 14:33:51.323680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.029 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.339330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.339371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.356878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.356917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.372724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.372786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.389061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:19:45.030 [2024-12-06 14:33:51.389101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.404200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.404238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.421478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.421515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.436973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.437009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.454448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.454518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.469022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.469063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.484546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.484583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.503131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:19:45.030 [2024-12-06 14:33:51.503187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.519585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.519625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.535548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.535588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.550769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.550824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.566438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.566491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.576657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.576694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.592423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.592477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.609180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.609247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.625809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.625874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.641863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.641902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.658067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.658113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.674895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.674957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.690520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.690568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.030 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.030 [2024-12-06 14:33:51.706047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.030 [2024-12-06 14:33:51.706093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.717167] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.717206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.732756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.732796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.749075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.749114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.764233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.764273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.779531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.779581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.796238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.796313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.811738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.811778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 
14:33:51.826912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.826954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.836902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.836940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.853141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.853179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.868223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.868263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.884449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.884500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.901902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.901942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.917986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.918026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:19:45.031 [2024-12-06 14:33:51.935544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.935583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.952072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.952111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.967617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.967656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.031 [2024-12-06 14:33:51.986507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.031 [2024-12-06 14:33:51.986546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.031 2024/12/06 14:33:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.290 [2024-12-06 14:33:52.002319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.290 [2024-12-06 14:33:52.002357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.290 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.290 [2024-12-06 14:33:52.018054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.290 [2024-12-06 14:33:52.018094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.290 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.290 [2024-12-06 14:33:52.033404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.290 [2024-12-06 14:33:52.033475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:19:45.291 [2024-12-06 14:33:52.050452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.291 [2024-12-06 14:33:52.050504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.291 [2024-12-06 14:33:52.066998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.291 [2024-12-06 14:33:52.067035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.291 [2024-12-06 14:33:52.084849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.291 [2024-12-06 14:33:52.084900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.291 [2024-12-06 14:33:52.099003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.291 [2024-12-06 14:33:52.099041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.291 [2024-12-06 14:33:52.114219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.291 [2024-12-06 14:33:52.114257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.291 [2024-12-06 14:33:52.130191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.291 [2024-12-06 14:33:52.130256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:45.291 [2024-12-06 14:33:52.146488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:45.291 [2024-12-06 14:33:52.146527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters
00:19:45.291 [2024-12-06 14:33:52.163289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:45.291 [2024-12-06 14:33:52.163326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:45.291 2024/12/06 14:33:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line error sequence (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", nvmf_rpc_ns_paused: "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters for method nvmf_subsystem_add_ns with params map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]) repeats for every retry from 14:33:52.178 through 14:33:54.368, log timestamps 00:19:45.291 through 00:19:47.649; only the timestamps differ ...]
00:19:47.649 [2024-12-06 14:33:54.382151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:47.649 [2024-12-06 14:33:54.382206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:47.649 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.649 [2024-12-06 14:33:54.398092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.649 [2024-12-06 14:33:54.398146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.649 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.649 [2024-12-06 14:33:54.408896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.649 [2024-12-06 14:33:54.408950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.649 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.649 [2024-12-06 14:33:54.423321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.649 [2024-12-06 14:33:54.423376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.439051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.439111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.450541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.450600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.465483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.465525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.482206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.482262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.497977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.498032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.516394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.516478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.531783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.531880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.548259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.548331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.567595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.567637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.583269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.583322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.600042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.600096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.650 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.650 [2024-12-06 14:33:54.616065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.650 [2024-12-06 14:33:54.616129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.632015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.632072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.647550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.647619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.658124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.658187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.673268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.673324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.688531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.688585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.704123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.704178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.714604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.714677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.729198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.729253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.746107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.746146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.762252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.762296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.780718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.780784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.796095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.796156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.812590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.812666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.829098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.829141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.845433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.910 [2024-12-06 14:33:54.845471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.910 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:47.910 [2024-12-06 14:33:54.865347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.911 [2024-12-06 14:33:54.865471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.911 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.880174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.880232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.890201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.890262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.906358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.906440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.921912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.921967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.939229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.939286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.955041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.955080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.971413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.971497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:54.987702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:54.987759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:55.005092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:55.005149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:55.021120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:55.021187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:55.036937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:55.037008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 
14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:55.052677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:55.052733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:55.069233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:55.069288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.171 [2024-12-06 14:33:55.084973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.171 [2024-12-06 14:33:55.085032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.171 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.172 [2024-12-06 14:33:55.096344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.172 [2024-12-06 14:33:55.096436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.172 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.172 [2024-12-06 14:33:55.112099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.172 [2024-12-06 14:33:55.112174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.172 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.172 [2024-12-06 14:33:55.128437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.172 [2024-12-06 14:33:55.128505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.172 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.146171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.146214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.162788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.162831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.179049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.179104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.195815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.195872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.211323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.211388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.226877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.226950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.237871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.237927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.431 [2024-12-06 14:33:55.253351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.431 [2024-12-06 14:33:55.253457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:19:48.431 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.269332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.269440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.286931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.286987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.303037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.303094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.319231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.319288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.337824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.337894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.354276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.354330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.371160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.371209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.432 [2024-12-06 14:33:55.387929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.432 [2024-12-06 14:33:55.388003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.432 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.402626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.402681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.418672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.418710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.436376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.436445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.451077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.451131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.461514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.461566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.477295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.477349] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.493070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.493144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.503430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.503469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.519759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.519800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.534759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.534802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.544951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.545022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.559900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.559963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.570181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 
14:33:55.570236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.585723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.585781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.602317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.602390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.618382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.618510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.635981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.636036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.691 [2024-12-06 14:33:55.652682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.691 [2024-12-06 14:33:55.652749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.691 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.667817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.667871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.685752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
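A few lines below, the stream of rejected RPCs is interleaved with bdevperf's end-of-run latency summary for the Nvme1n1 job (randrw, 50% reads, queue depth 128, 8192-byte I/O). As a quick, purely arithmetical cross-check of that summary, using only numbers that appear in it, the IOPS and MiB/s columns are consistent with each other:
# Sketch: relate the IOPS and MiB/s columns of the bdevperf summary shown below.
iops = 10873.22          # reported IOPS for Nvme1n1
io_size = 8192           # bytes per I/O, from the job description
print(round(iops * io_size / (1024 * 1024), 2))  # 84.95 -> matches the MiB/s column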
00:19:48.950 [2024-12-06 14:33:55.685808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.701542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.701582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.718182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.718261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.734039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.734095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.750515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.750570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.764560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.764615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.779473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.779510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:19:48.950
00:19:48.950                                                                                                Latency(us)
00:19:48.950 [2024-12-06T14:33:55.920Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:48.950 [2024-12-06T14:33:55.920Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:48.950      Nvme1n1                     :       5.01   10873.22      84.95       0.00       0.00   11757.29    4349.21   20852.36
00:19:48.950 [2024-12-06T14:33:55.920Z] ===================================================================================================================
00:19:48.950 [2024-12-06T14:33:55.920Z] Total                       :              10873.22      84.95       0.00       0.00   11757.29    4349.21   20852.36
00:19:48.950 [2024-12-06 14:33:55.793177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.793229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.802960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.803011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.950 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.950 [2024-12-06 14:33:55.814942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.950 [2024-12-06 14:33:55.815004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.951 [2024-12-06 14:33:55.826934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.826978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.951 [2024-12-06 14:33:55.838951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.838995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.951 [2024-12-06 14:33:55.850954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.850997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:19:48.951 [2024-12-06 14:33:55.862957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.863001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.951 [2024-12-06 14:33:55.874958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.875002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.951 [2024-12-06 14:33:55.886961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.887005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.951 [2024-12-06 14:33:55.898961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.899004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:48.951 [2024-12-06 14:33:55.910963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:48.951 [2024-12-06 14:33:55.911006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.951 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.210 [2024-12-06 14:33:55.923002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.210 [2024-12-06 14:33:55.923046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.210 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.210 [2024-12-06 14:33:55.934970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.210 [2024-12-06 14:33:55.935012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.210 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:19:49.210 [2024-12-06 14:33:55.946979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.210 [2024-12-06 14:33:55.947026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.210 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.210 [2024-12-06 14:33:55.958983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.210 [2024-12-06 14:33:55.959036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.210 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.210 [2024-12-06 14:33:55.970986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.210 [2024-12-06 14:33:55.971018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.210 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.210 [2024-12-06 14:33:55.982991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.210 [2024-12-06 14:33:55.983039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.210 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.210 [2024-12-06 14:33:55.994992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.210 [2024-12-06 14:33:55.995039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.006998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.007044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.019006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.019056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.031010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.031060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.043033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.043091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.055011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.055059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.067012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.067061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.079015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.079064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.091015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.091064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.103018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.103067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.115021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.115071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.127032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.127082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.139047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.139091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.151063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.151106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.163051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.163087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.211 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.211 [2024-12-06 14:33:56.175076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.211 [2024-12-06 14:33:56.175138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.470 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.470 [2024-12-06 14:33:56.187042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:49.470 [2024-12-06 14:33:56.187091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:49.470 2024/12/06 14:33:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:49.470 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76098) - No such process 00:19:49.470 14:33:56 -- target/zcopy.sh@49 -- # wait 76098 00:19:49.470 14:33:56 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.470 14:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.470 14:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:49.470 14:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.470 14:33:56 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:49.470 14:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.470 14:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:49.470 delay0 00:19:49.470 14:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.470 14:33:56 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:49.470 14:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.470 14:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:49.470 14:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.470 14:33:56 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:49.470 [2024-12-06 14:33:56.391263] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:56.049 Initializing NVMe Controllers 00:19:56.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:56.049 Initialization complete. Launching workers. 
00:19:56.049 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 66 00:19:56.049 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 345, failed to submit 41 00:19:56.049 success 149, unsuccess 196, failed 0 00:19:56.049 14:34:02 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:56.049 14:34:02 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:56.049 14:34:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:56.049 14:34:02 -- nvmf/common.sh@116 -- # sync 00:19:56.049 14:34:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:56.049 14:34:02 -- nvmf/common.sh@119 -- # set +e 00:19:56.049 14:34:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:56.049 14:34:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:56.049 rmmod nvme_tcp 00:19:56.049 rmmod nvme_fabrics 00:19:56.049 rmmod nvme_keyring 00:19:56.049 14:34:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:56.049 14:34:02 -- nvmf/common.sh@123 -- # set -e 00:19:56.049 14:34:02 -- nvmf/common.sh@124 -- # return 0 00:19:56.049 14:34:02 -- nvmf/common.sh@477 -- # '[' -n 75924 ']' 00:19:56.049 14:34:02 -- nvmf/common.sh@478 -- # killprocess 75924 00:19:56.049 14:34:02 -- common/autotest_common.sh@936 -- # '[' -z 75924 ']' 00:19:56.049 14:34:02 -- common/autotest_common.sh@940 -- # kill -0 75924 00:19:56.049 14:34:02 -- common/autotest_common.sh@941 -- # uname 00:19:56.049 14:34:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.049 14:34:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75924 00:19:56.049 14:34:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:56.049 killing process with pid 75924 00:19:56.049 14:34:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:56.049 14:34:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75924' 00:19:56.049 14:34:02 -- common/autotest_common.sh@955 -- # kill 75924 00:19:56.049 14:34:02 -- common/autotest_common.sh@960 -- # wait 75924 00:19:56.049 14:34:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:56.049 14:34:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:56.049 14:34:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:56.049 14:34:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.049 14:34:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:56.049 14:34:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.049 14:34:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.049 14:34:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.049 14:34:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:56.049 00:19:56.049 real 0m25.555s 00:19:56.049 user 0m39.826s 00:19:56.049 sys 0m7.831s 00:19:56.049 14:34:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.049 14:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:56.049 ************************************ 00:19:56.049 END TEST nvmf_zcopy 00:19:56.049 ************************************ 00:19:56.049 14:34:02 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:56.049 14:34:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.049 14:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.049 14:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:56.049 ************************************ 00:19:56.049 START TEST nvmf_nmic 
00:19:56.049 ************************************ 00:19:56.049 14:34:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:56.308 * Looking for test storage... 00:19:56.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:56.308 14:34:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:56.308 14:34:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:56.308 14:34:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:56.308 14:34:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:56.308 14:34:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:56.308 14:34:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:56.308 14:34:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:56.308 14:34:03 -- scripts/common.sh@335 -- # IFS=.-: 00:19:56.308 14:34:03 -- scripts/common.sh@335 -- # read -ra ver1 00:19:56.308 14:34:03 -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.308 14:34:03 -- scripts/common.sh@336 -- # read -ra ver2 00:19:56.308 14:34:03 -- scripts/common.sh@337 -- # local 'op=<' 00:19:56.308 14:34:03 -- scripts/common.sh@339 -- # ver1_l=2 00:19:56.308 14:34:03 -- scripts/common.sh@340 -- # ver2_l=1 00:19:56.309 14:34:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:56.309 14:34:03 -- scripts/common.sh@343 -- # case "$op" in 00:19:56.309 14:34:03 -- scripts/common.sh@344 -- # : 1 00:19:56.309 14:34:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:56.309 14:34:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:56.309 14:34:03 -- scripts/common.sh@364 -- # decimal 1 00:19:56.309 14:34:03 -- scripts/common.sh@352 -- # local d=1 00:19:56.309 14:34:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.309 14:34:03 -- scripts/common.sh@354 -- # echo 1 00:19:56.309 14:34:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:56.309 14:34:03 -- scripts/common.sh@365 -- # decimal 2 00:19:56.309 14:34:03 -- scripts/common.sh@352 -- # local d=2 00:19:56.309 14:34:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.309 14:34:03 -- scripts/common.sh@354 -- # echo 2 00:19:56.309 14:34:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:56.309 14:34:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:56.309 14:34:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:56.309 14:34:03 -- scripts/common.sh@367 -- # return 0 00:19:56.309 14:34:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.309 14:34:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:56.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.309 --rc genhtml_branch_coverage=1 00:19:56.309 --rc genhtml_function_coverage=1 00:19:56.309 --rc genhtml_legend=1 00:19:56.309 --rc geninfo_all_blocks=1 00:19:56.309 --rc geninfo_unexecuted_blocks=1 00:19:56.309 00:19:56.309 ' 00:19:56.309 14:34:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:56.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.309 --rc genhtml_branch_coverage=1 00:19:56.309 --rc genhtml_function_coverage=1 00:19:56.309 --rc genhtml_legend=1 00:19:56.309 --rc geninfo_all_blocks=1 00:19:56.309 --rc geninfo_unexecuted_blocks=1 00:19:56.309 00:19:56.309 ' 00:19:56.309 14:34:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:56.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.309 --rc 
genhtml_branch_coverage=1 00:19:56.309 --rc genhtml_function_coverage=1 00:19:56.309 --rc genhtml_legend=1 00:19:56.309 --rc geninfo_all_blocks=1 00:19:56.309 --rc geninfo_unexecuted_blocks=1 00:19:56.309 00:19:56.309 ' 00:19:56.309 14:34:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:56.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.309 --rc genhtml_branch_coverage=1 00:19:56.309 --rc genhtml_function_coverage=1 00:19:56.309 --rc genhtml_legend=1 00:19:56.309 --rc geninfo_all_blocks=1 00:19:56.309 --rc geninfo_unexecuted_blocks=1 00:19:56.309 00:19:56.309 ' 00:19:56.309 14:34:03 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.309 14:34:03 -- nvmf/common.sh@7 -- # uname -s 00:19:56.309 14:34:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.309 14:34:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.309 14:34:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.309 14:34:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.309 14:34:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.309 14:34:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.309 14:34:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.309 14:34:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.309 14:34:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.309 14:34:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.309 14:34:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:19:56.309 14:34:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:19:56.309 14:34:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.309 14:34:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.309 14:34:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.309 14:34:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.309 14:34:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.309 14:34:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.309 14:34:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.309 14:34:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.309 14:34:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.309 14:34:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.309 14:34:03 -- paths/export.sh@5 -- # export PATH 00:19:56.309 14:34:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.309 14:34:03 -- nvmf/common.sh@46 -- # : 0 00:19:56.309 14:34:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.309 14:34:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.309 14:34:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.309 14:34:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.309 14:34:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.309 14:34:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:56.309 14:34:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.309 14:34:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.309 14:34:03 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.309 14:34:03 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.309 14:34:03 -- target/nmic.sh@14 -- # nvmftestinit 00:19:56.309 14:34:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:56.309 14:34:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.309 14:34:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:56.309 14:34:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:56.309 14:34:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:56.309 14:34:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.309 14:34:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.309 14:34:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.309 14:34:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:56.309 14:34:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:56.309 14:34:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:56.309 14:34:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:56.309 14:34:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:56.309 14:34:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:56.309 14:34:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.309 14:34:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.309 14:34:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.309 14:34:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:56.309 14:34:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.309 14:34:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.309 14:34:03 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.309 14:34:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.309 14:34:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.309 14:34:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.309 14:34:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.309 14:34:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.309 14:34:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:56.309 14:34:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:56.309 Cannot find device "nvmf_tgt_br" 00:19:56.309 14:34:03 -- nvmf/common.sh@154 -- # true 00:19:56.309 14:34:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.309 Cannot find device "nvmf_tgt_br2" 00:19:56.309 14:34:03 -- nvmf/common.sh@155 -- # true 00:19:56.309 14:34:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:56.309 14:34:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:56.309 Cannot find device "nvmf_tgt_br" 00:19:56.309 14:34:03 -- nvmf/common.sh@157 -- # true 00:19:56.309 14:34:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:56.309 Cannot find device "nvmf_tgt_br2" 00:19:56.309 14:34:03 -- nvmf/common.sh@158 -- # true 00:19:56.309 14:34:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:56.568 14:34:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:56.568 14:34:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.568 14:34:03 -- nvmf/common.sh@161 -- # true 00:19:56.568 14:34:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.568 14:34:03 -- nvmf/common.sh@162 -- # true 00:19:56.568 14:34:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.568 14:34:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.568 14:34:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.568 14:34:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.568 14:34:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.568 14:34:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.568 14:34:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.568 14:34:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.568 14:34:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.568 14:34:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:56.568 14:34:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:56.568 14:34:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:56.568 14:34:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:56.568 14:34:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.568 14:34:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.568 14:34:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:19:56.568 14:34:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:56.568 14:34:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:56.568 14:34:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.568 14:34:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.569 14:34:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.569 14:34:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.569 14:34:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.569 14:34:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:56.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:56.569 00:19:56.569 --- 10.0.0.2 ping statistics --- 00:19:56.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.569 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:56.569 14:34:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:56.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:19:56.569 00:19:56.569 --- 10.0.0.3 ping statistics --- 00:19:56.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.569 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:56.569 14:34:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:56.569 00:19:56.569 --- 10.0.0.1 ping statistics --- 00:19:56.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.569 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:56.569 14:34:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.569 14:34:03 -- nvmf/common.sh@421 -- # return 0 00:19:56.569 14:34:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.569 14:34:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.569 14:34:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.569 14:34:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.569 14:34:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.569 14:34:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.569 14:34:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.828 14:34:03 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:56.828 14:34:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:56.828 14:34:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.828 14:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:56.828 14:34:03 -- nvmf/common.sh@469 -- # nvmfpid=76423 00:19:56.828 14:34:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.828 14:34:03 -- nvmf/common.sh@470 -- # waitforlisten 76423 00:19:56.828 14:34:03 -- common/autotest_common.sh@829 -- # '[' -z 76423 ']' 00:19:56.828 14:34:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.828 14:34:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:56.828 14:34:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.828 14:34:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.828 14:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:56.828 [2024-12-06 14:34:03.600398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:56.828 [2024-12-06 14:34:03.600534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.828 [2024-12-06 14:34:03.736381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.087 [2024-12-06 14:34:03.828249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:57.087 [2024-12-06 14:34:03.828395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.087 [2024-12-06 14:34:03.828407] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.087 [2024-12-06 14:34:03.828432] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.087 [2024-12-06 14:34:03.828557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.087 [2024-12-06 14:34:03.829096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.087 [2024-12-06 14:34:03.829256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.087 [2024-12-06 14:34:03.829261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.022 14:34:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.022 14:34:04 -- common/autotest_common.sh@862 -- # return 0 00:19:58.022 14:34:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:58.022 14:34:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.022 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.022 14:34:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.022 14:34:04 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.022 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.022 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.022 [2024-12-06 14:34:04.698680] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.022 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.022 14:34:04 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.022 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.022 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.022 Malloc0 00:19:58.022 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.022 14:34:04 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:58.022 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.022 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.022 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.022 14:34:04 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.023 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.023 14:34:04 
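For readers reconstructing this environment by hand, the nvmf_veth_init and nvmfappstart steps logged above boil down to roughly the following shell sequence. This is a minimal sketch only: it assumes the interface names and 10.0.0.0/24 addressing printed in this log, keeps just one target-side interface, and omits the teardown path and error handling.

    # create the target network namespace and a veth pair into it
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator side gets 10.0.0.1, target side gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # start the SPDK target inside the namespace, as nvmfappstart does above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
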
-- common/autotest_common.sh@10 -- # set +x 00:19:58.023 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.023 14:34:04 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.023 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.023 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.023 [2024-12-06 14:34:04.766281] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.023 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.023 test case1: single bdev can't be used in multiple subsystems 00:19:58.023 14:34:04 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:58.023 14:34:04 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:58.023 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.023 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.023 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.023 14:34:04 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:58.023 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.023 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.023 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.023 14:34:04 -- target/nmic.sh@28 -- # nmic_status=0 00:19:58.023 14:34:04 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:58.023 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.023 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.023 [2024-12-06 14:34:04.794069] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:58.023 [2024-12-06 14:34:04.794109] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:58.023 [2024-12-06 14:34:04.794121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:58.023 2024/12/06 14:34:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:58.023 request: 00:19:58.023 { 00:19:58.023 "method": "nvmf_subsystem_add_ns", 00:19:58.023 "params": { 00:19:58.023 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:58.023 "namespace": { 00:19:58.023 "bdev_name": "Malloc0" 00:19:58.023 } 00:19:58.023 } 00:19:58.023 } 00:19:58.023 Got JSON-RPC error response 00:19:58.023 GoRPCClient: error on JSON-RPC call 00:19:58.023 14:34:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:58.023 14:34:04 -- target/nmic.sh@29 -- # nmic_status=1 00:19:58.023 14:34:04 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:58.023 Adding namespace failed - expected result. 00:19:58.023 14:34:04 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
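The "test case1" failure recorded above is the expected outcome of claiming one bdev from two subsystems. Reproduced outside the test harness with scripts/rpc.py, and using the same bdev and subsystem names as this log (the tcp transport and listener created earlier in nmic.sh are assumed to already be in place), the sequence would look roughly like this:

    # Malloc0 is created once and claimed by cnode1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # a second subsystem cannot add the same bdev; this call fails with Code=-32602 as shown above
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected: Malloc0 already claimed'
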
00:19:58.023 test case2: host connect to nvmf target in multiple paths 00:19:58.023 14:34:04 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:58.023 14:34:04 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:58.023 14:34:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.023 14:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:58.023 [2024-12-06 14:34:04.806225] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:58.023 14:34:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.023 14:34:04 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:58.023 14:34:04 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:58.293 14:34:05 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:58.293 14:34:05 -- common/autotest_common.sh@1187 -- # local i=0 00:19:58.293 14:34:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:58.293 14:34:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:58.293 14:34:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:00.828 14:34:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:00.828 14:34:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:00.828 14:34:07 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:00.828 14:34:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:00.828 14:34:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:00.828 14:34:07 -- common/autotest_common.sh@1197 -- # return 0 00:20:00.828 14:34:07 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:00.828 [global] 00:20:00.828 thread=1 00:20:00.828 invalidate=1 00:20:00.828 rw=write 00:20:00.828 time_based=1 00:20:00.828 runtime=1 00:20:00.828 ioengine=libaio 00:20:00.828 direct=1 00:20:00.828 bs=4096 00:20:00.828 iodepth=1 00:20:00.828 norandommap=0 00:20:00.828 numjobs=1 00:20:00.828 00:20:00.828 verify_dump=1 00:20:00.828 verify_backlog=512 00:20:00.828 verify_state_save=0 00:20:00.828 do_verify=1 00:20:00.828 verify=crc32c-intel 00:20:00.828 [job0] 00:20:00.828 filename=/dev/nvme0n1 00:20:00.828 Could not set queue depth (nvme0n1) 00:20:00.828 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:00.828 fio-3.35 00:20:00.828 Starting 1 thread 00:20:01.765 00:20:01.765 job0: (groupid=0, jobs=1): err= 0: pid=76533: Fri Dec 6 14:34:08 2024 00:20:01.765 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:20:01.765 slat (nsec): min=12170, max=57653, avg=16127.75, stdev=5606.84 00:20:01.765 clat (usec): min=120, max=473, avg=152.49, stdev=22.08 00:20:01.765 lat (usec): min=136, max=489, avg=168.62, stdev=23.09 00:20:01.765 clat percentiles (usec): 00:20:01.765 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:20:01.765 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:20:01.765 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 180], 
95.00th=[ 192], 00:20:01.765 | 99.00th=[ 212], 99.50th=[ 235], 99.90th=[ 343], 99.95th=[ 433], 00:20:01.765 | 99.99th=[ 474] 00:20:01.765 write: IOPS=3500, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec); 0 zone resets 00:20:01.765 slat (usec): min=18, max=164, avg=25.38, stdev= 8.30 00:20:01.765 clat (usec): min=83, max=259, avg=109.38, stdev=15.90 00:20:01.765 lat (usec): min=108, max=413, avg=134.77, stdev=18.70 00:20:01.765 clat percentiles (usec): 00:20:01.765 | 1.00th=[ 91], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:20:01.765 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 109], 00:20:01.765 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 141], 00:20:01.765 | 99.00th=[ 161], 99.50th=[ 176], 99.90th=[ 227], 99.95th=[ 249], 00:20:01.765 | 99.99th=[ 260] 00:20:01.765 bw ( KiB/s): min=13373, max=13373, per=95.51%, avg=13373.00, stdev= 0.00, samples=1 00:20:01.765 iops : min= 3343, max= 3343, avg=3343.00, stdev= 0.00, samples=1 00:20:01.765 lat (usec) : 100=16.16%, 250=83.61%, 500=0.23% 00:20:01.765 cpu : usr=2.30%, sys=9.80%, ctx=6577, majf=0, minf=5 00:20:01.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.765 issued rwts: total=3072,3504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.765 00:20:01.765 Run status group 0 (all jobs): 00:20:01.765 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:20:01.765 WRITE: bw=13.7MiB/s (14.3MB/s), 13.7MiB/s-13.7MiB/s (14.3MB/s-14.3MB/s), io=13.7MiB (14.4MB), run=1001-1001msec 00:20:01.765 00:20:01.765 Disk stats (read/write): 00:20:01.765 nvme0n1: ios=2848/3072, merge=0/0, ticks=458/392, in_queue=850, util=91.08% 00:20:01.765 14:34:08 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:01.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:01.765 14:34:08 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:01.765 14:34:08 -- common/autotest_common.sh@1208 -- # local i=0 00:20:01.765 14:34:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:01.765 14:34:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:01.765 14:34:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:01.765 14:34:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:01.765 14:34:08 -- common/autotest_common.sh@1220 -- # return 0 00:20:01.765 14:34:08 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:01.765 14:34:08 -- target/nmic.sh@53 -- # nvmftestfini 00:20:01.765 14:34:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:01.765 14:34:08 -- nvmf/common.sh@116 -- # sync 00:20:01.765 14:34:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:01.765 14:34:08 -- nvmf/common.sh@119 -- # set +e 00:20:01.765 14:34:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:01.765 14:34:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:01.765 rmmod nvme_tcp 00:20:01.765 rmmod nvme_fabrics 00:20:01.765 rmmod nvme_keyring 00:20:01.765 14:34:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:01.765 14:34:08 -- nvmf/common.sh@123 -- # set -e 00:20:01.765 14:34:08 -- nvmf/common.sh@124 -- # return 0 00:20:01.765 14:34:08 -- nvmf/common.sh@477 -- # '[' -n 
76423 ']' 00:20:01.765 14:34:08 -- nvmf/common.sh@478 -- # killprocess 76423 00:20:01.765 14:34:08 -- common/autotest_common.sh@936 -- # '[' -z 76423 ']' 00:20:01.765 14:34:08 -- common/autotest_common.sh@940 -- # kill -0 76423 00:20:01.765 14:34:08 -- common/autotest_common.sh@941 -- # uname 00:20:01.765 14:34:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:01.765 14:34:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76423 00:20:01.765 14:34:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:01.765 14:34:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:01.765 killing process with pid 76423 00:20:01.765 14:34:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76423' 00:20:01.765 14:34:08 -- common/autotest_common.sh@955 -- # kill 76423 00:20:01.765 14:34:08 -- common/autotest_common.sh@960 -- # wait 76423 00:20:02.024 14:34:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:02.024 14:34:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:02.024 14:34:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:02.024 14:34:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.024 14:34:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:02.024 14:34:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.024 14:34:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.024 14:34:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.283 14:34:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:02.283 00:20:02.283 real 0m6.072s 00:20:02.283 user 0m20.123s 00:20:02.283 sys 0m1.491s 00:20:02.283 14:34:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:02.283 14:34:09 -- common/autotest_common.sh@10 -- # set +x 00:20:02.283 ************************************ 00:20:02.283 END TEST nvmf_nmic 00:20:02.283 ************************************ 00:20:02.283 14:34:09 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:02.283 14:34:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:02.283 14:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.283 14:34:09 -- common/autotest_common.sh@10 -- # set +x 00:20:02.283 ************************************ 00:20:02.283 START TEST nvmf_fio_target 00:20:02.283 ************************************ 00:20:02.283 14:34:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:02.283 * Looking for test storage... 
00:20:02.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:02.283 14:34:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:02.283 14:34:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:02.283 14:34:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:02.283 14:34:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:02.283 14:34:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:02.283 14:34:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:02.283 14:34:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:02.283 14:34:09 -- scripts/common.sh@335 -- # IFS=.-: 00:20:02.283 14:34:09 -- scripts/common.sh@335 -- # read -ra ver1 00:20:02.284 14:34:09 -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.284 14:34:09 -- scripts/common.sh@336 -- # read -ra ver2 00:20:02.284 14:34:09 -- scripts/common.sh@337 -- # local 'op=<' 00:20:02.284 14:34:09 -- scripts/common.sh@339 -- # ver1_l=2 00:20:02.284 14:34:09 -- scripts/common.sh@340 -- # ver2_l=1 00:20:02.284 14:34:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:02.284 14:34:09 -- scripts/common.sh@343 -- # case "$op" in 00:20:02.284 14:34:09 -- scripts/common.sh@344 -- # : 1 00:20:02.284 14:34:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:02.284 14:34:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.284 14:34:09 -- scripts/common.sh@364 -- # decimal 1 00:20:02.284 14:34:09 -- scripts/common.sh@352 -- # local d=1 00:20:02.284 14:34:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.284 14:34:09 -- scripts/common.sh@354 -- # echo 1 00:20:02.284 14:34:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:02.284 14:34:09 -- scripts/common.sh@365 -- # decimal 2 00:20:02.284 14:34:09 -- scripts/common.sh@352 -- # local d=2 00:20:02.284 14:34:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.284 14:34:09 -- scripts/common.sh@354 -- # echo 2 00:20:02.284 14:34:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:02.284 14:34:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:02.284 14:34:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:02.284 14:34:09 -- scripts/common.sh@367 -- # return 0 00:20:02.284 14:34:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.284 14:34:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:02.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.284 --rc genhtml_branch_coverage=1 00:20:02.284 --rc genhtml_function_coverage=1 00:20:02.284 --rc genhtml_legend=1 00:20:02.284 --rc geninfo_all_blocks=1 00:20:02.284 --rc geninfo_unexecuted_blocks=1 00:20:02.284 00:20:02.284 ' 00:20:02.284 14:34:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:02.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.284 --rc genhtml_branch_coverage=1 00:20:02.284 --rc genhtml_function_coverage=1 00:20:02.284 --rc genhtml_legend=1 00:20:02.284 --rc geninfo_all_blocks=1 00:20:02.284 --rc geninfo_unexecuted_blocks=1 00:20:02.284 00:20:02.284 ' 00:20:02.284 14:34:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:02.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.284 --rc genhtml_branch_coverage=1 00:20:02.284 --rc genhtml_function_coverage=1 00:20:02.284 --rc genhtml_legend=1 00:20:02.284 --rc geninfo_all_blocks=1 00:20:02.284 --rc geninfo_unexecuted_blocks=1 00:20:02.284 00:20:02.284 ' 00:20:02.284 
14:34:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:02.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.284 --rc genhtml_branch_coverage=1 00:20:02.284 --rc genhtml_function_coverage=1 00:20:02.284 --rc genhtml_legend=1 00:20:02.284 --rc geninfo_all_blocks=1 00:20:02.284 --rc geninfo_unexecuted_blocks=1 00:20:02.284 00:20:02.284 ' 00:20:02.284 14:34:09 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:02.543 14:34:09 -- nvmf/common.sh@7 -- # uname -s 00:20:02.543 14:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.543 14:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.543 14:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.543 14:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.543 14:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.543 14:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.543 14:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.543 14:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.543 14:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.543 14:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.543 14:34:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:02.543 14:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:02.543 14:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.543 14:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.543 14:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:02.543 14:34:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:02.543 14:34:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.543 14:34:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.543 14:34:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.543 14:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.543 14:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.543 14:34:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.543 14:34:09 -- paths/export.sh@5 -- # export PATH 00:20:02.543 14:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.543 14:34:09 -- nvmf/common.sh@46 -- # : 0 00:20:02.543 14:34:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:02.543 14:34:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:02.543 14:34:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:02.543 14:34:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.543 14:34:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.543 14:34:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:02.543 14:34:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:02.543 14:34:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:02.543 14:34:09 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:02.543 14:34:09 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:02.543 14:34:09 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:02.543 14:34:09 -- target/fio.sh@16 -- # nvmftestinit 00:20:02.543 14:34:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:02.543 14:34:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.543 14:34:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:02.543 14:34:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:02.543 14:34:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:02.543 14:34:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.543 14:34:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.543 14:34:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.543 14:34:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:02.543 14:34:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:02.543 14:34:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:02.543 14:34:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:02.543 14:34:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:02.544 14:34:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:02.544 14:34:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.544 14:34:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.544 14:34:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:02.544 14:34:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:02.544 14:34:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:02.544 14:34:09 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:02.544 14:34:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:02.544 14:34:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.544 14:34:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:02.544 14:34:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:02.544 14:34:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:02.544 14:34:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:02.544 14:34:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:02.544 14:34:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:02.544 Cannot find device "nvmf_tgt_br" 00:20:02.544 14:34:09 -- nvmf/common.sh@154 -- # true 00:20:02.544 14:34:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.544 Cannot find device "nvmf_tgt_br2" 00:20:02.544 14:34:09 -- nvmf/common.sh@155 -- # true 00:20:02.544 14:34:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:02.544 14:34:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:02.544 Cannot find device "nvmf_tgt_br" 00:20:02.544 14:34:09 -- nvmf/common.sh@157 -- # true 00:20:02.544 14:34:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:02.544 Cannot find device "nvmf_tgt_br2" 00:20:02.544 14:34:09 -- nvmf/common.sh@158 -- # true 00:20:02.544 14:34:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:02.544 14:34:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:02.544 14:34:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.544 14:34:09 -- nvmf/common.sh@161 -- # true 00:20:02.544 14:34:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.544 14:34:09 -- nvmf/common.sh@162 -- # true 00:20:02.544 14:34:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.544 14:34:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:02.544 14:34:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:02.544 14:34:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:02.544 14:34:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:02.544 14:34:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:02.804 14:34:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:02.804 14:34:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:02.804 14:34:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:02.804 14:34:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:02.804 14:34:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:02.804 14:34:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:02.804 14:34:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:02.804 14:34:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:02.804 14:34:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:02.804 14:34:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:02.804 14:34:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:02.804 14:34:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:02.804 14:34:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:02.804 14:34:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:02.804 14:34:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:02.804 14:34:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:02.804 14:34:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:02.804 14:34:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:02.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:20:02.804 00:20:02.804 --- 10.0.0.2 ping statistics --- 00:20:02.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.804 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:02.804 14:34:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:02.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:02.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:20:02.804 00:20:02.804 --- 10.0.0.3 ping statistics --- 00:20:02.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.804 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:02.804 14:34:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:02.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:20:02.804 00:20:02.804 --- 10.0.0.1 ping statistics --- 00:20:02.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.804 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:02.804 14:34:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.804 14:34:09 -- nvmf/common.sh@421 -- # return 0 00:20:02.804 14:34:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:02.804 14:34:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.804 14:34:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:02.804 14:34:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:02.804 14:34:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.804 14:34:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:02.804 14:34:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:02.804 14:34:09 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:02.804 14:34:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:02.804 14:34:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.804 14:34:09 -- common/autotest_common.sh@10 -- # set +x 00:20:02.804 14:34:09 -- nvmf/common.sh@469 -- # nvmfpid=76719 00:20:02.804 14:34:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:02.804 14:34:09 -- nvmf/common.sh@470 -- # waitforlisten 76719 00:20:02.804 14:34:09 -- common/autotest_common.sh@829 -- # '[' -z 76719 ']' 00:20:02.804 14:34:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.804 14:34:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.804 14:34:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:02.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.804 14:34:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.804 14:34:09 -- common/autotest_common.sh@10 -- # set +x 00:20:02.804 [2024-12-06 14:34:09.736743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:02.804 [2024-12-06 14:34:09.736881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.064 [2024-12-06 14:34:09.879567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.064 [2024-12-06 14:34:10.007692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:03.064 [2024-12-06 14:34:10.007881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.064 [2024-12-06 14:34:10.007897] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.064 [2024-12-06 14:34:10.007908] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.064 [2024-12-06 14:34:10.008118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.064 [2024-12-06 14:34:10.008703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.064 [2024-12-06 14:34:10.010470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.064 [2024-12-06 14:34:10.010504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.000 14:34:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.000 14:34:10 -- common/autotest_common.sh@862 -- # return 0 00:20:04.000 14:34:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:04.000 14:34:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.000 14:34:10 -- common/autotest_common.sh@10 -- # set +x 00:20:04.000 14:34:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.000 14:34:10 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:04.265 [2024-12-06 14:34:11.087577] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.265 14:34:11 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:04.539 14:34:11 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:04.539 14:34:11 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:04.798 14:34:11 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:04.798 14:34:11 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.061 14:34:11 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:05.061 14:34:11 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.322 14:34:12 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:05.322 14:34:12 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:05.581 14:34:12 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.149 14:34:12 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:20:06.149 14:34:12 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.409 14:34:13 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:06.409 14:34:13 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.667 14:34:13 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:06.667 14:34:13 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:06.667 14:34:13 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:06.925 14:34:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:06.925 14:34:13 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.184 14:34:14 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:07.184 14:34:14 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:07.442 14:34:14 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.701 [2024-12-06 14:34:14.546175] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.701 14:34:14 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:07.960 14:34:14 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:08.218 14:34:15 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:08.477 14:34:15 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:08.477 14:34:15 -- common/autotest_common.sh@1187 -- # local i=0 00:20:08.477 14:34:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:08.477 14:34:15 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:20:08.477 14:34:15 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:20:08.477 14:34:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:10.377 14:34:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:10.377 14:34:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:10.377 14:34:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:10.377 14:34:17 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:20:10.377 14:34:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:10.377 14:34:17 -- common/autotest_common.sh@1197 -- # return 0 00:20:10.377 14:34:17 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:10.377 [global] 00:20:10.377 thread=1 00:20:10.377 invalidate=1 00:20:10.377 rw=write 00:20:10.377 time_based=1 00:20:10.377 runtime=1 00:20:10.377 ioengine=libaio 00:20:10.377 direct=1 00:20:10.377 bs=4096 00:20:10.377 iodepth=1 00:20:10.377 norandommap=0 00:20:10.377 numjobs=1 00:20:10.377 00:20:10.377 verify_dump=1 00:20:10.377 verify_backlog=512 00:20:10.377 
verify_state_save=0 00:20:10.377 do_verify=1 00:20:10.377 verify=crc32c-intel 00:20:10.377 [job0] 00:20:10.377 filename=/dev/nvme0n1 00:20:10.377 [job1] 00:20:10.377 filename=/dev/nvme0n2 00:20:10.377 [job2] 00:20:10.377 filename=/dev/nvme0n3 00:20:10.377 [job3] 00:20:10.377 filename=/dev/nvme0n4 00:20:10.377 Could not set queue depth (nvme0n1) 00:20:10.377 Could not set queue depth (nvme0n2) 00:20:10.377 Could not set queue depth (nvme0n3) 00:20:10.377 Could not set queue depth (nvme0n4) 00:20:10.635 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.635 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.635 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.636 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.636 fio-3.35 00:20:10.636 Starting 4 threads 00:20:12.009 00:20:12.009 job0: (groupid=0, jobs=1): err= 0: pid=77017: Fri Dec 6 14:34:18 2024 00:20:12.009 read: IOPS=2092, BW=8372KiB/s (8573kB/s)(8380KiB/1001msec) 00:20:12.009 slat (nsec): min=11644, max=41352, avg=14009.18, stdev=2829.98 00:20:12.009 clat (usec): min=130, max=927, avg=215.55, stdev=61.28 00:20:12.009 lat (usec): min=143, max=941, avg=229.56, stdev=61.92 00:20:12.009 clat percentiles (usec): 00:20:12.009 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:20:12.009 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 194], 60.00th=[ 260], 00:20:12.009 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:20:12.009 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 644], 99.95th=[ 660], 00:20:12.009 | 99.99th=[ 930] 00:20:12.009 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:20:12.009 slat (usec): min=17, max=104, avg=22.70, stdev= 5.59 00:20:12.009 clat (usec): min=101, max=334, avg=177.31, stdev=45.19 00:20:12.009 lat (usec): min=119, max=439, avg=200.01, stdev=47.38 00:20:12.009 clat percentiles (usec): 00:20:12.009 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 120], 20.00th=[ 126], 00:20:12.009 | 30.00th=[ 133], 40.00th=[ 149], 50.00th=[ 200], 60.00th=[ 208], 00:20:12.009 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 237], 00:20:12.009 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 269], 99.95th=[ 293], 00:20:12.009 | 99.99th=[ 334] 00:20:12.009 bw ( KiB/s): min= 8192, max= 8192, per=18.77%, avg=8192.00, stdev= 0.00, samples=1 00:20:12.009 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:12.009 lat (usec) : 250=78.00%, 500=21.93%, 750=0.04%, 1000=0.02% 00:20:12.009 cpu : usr=1.30%, sys=6.90%, ctx=4655, majf=0, minf=9 00:20:12.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.009 issued rwts: total=2095,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.009 job1: (groupid=0, jobs=1): err= 0: pid=77018: Fri Dec 6 14:34:18 2024 00:20:12.009 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:20:12.009 slat (usec): min=12, max=109, avg=19.94, stdev= 7.12 00:20:12.009 clat (usec): min=135, max=1080, avg=217.86, stdev=53.85 00:20:12.009 lat (usec): min=154, max=1097, avg=237.81, stdev=54.91 00:20:12.009 clat percentiles 
(usec): 00:20:12.009 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 167], 00:20:12.009 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 215], 60.00th=[ 247], 00:20:12.009 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:20:12.009 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 420], 99.95th=[ 644], 00:20:12.009 | 99.99th=[ 1074] 00:20:12.009 write: IOPS=2213, BW=8855KiB/s (9068kB/s)(8864KiB/1001msec); 0 zone resets 00:20:12.009 slat (usec): min=18, max=168, avg=30.16, stdev=10.46 00:20:12.009 clat (usec): min=93, max=34530, avg=197.27, stdev=730.76 00:20:12.009 lat (usec): min=127, max=34552, avg=227.43, stdev=730.64 00:20:12.009 clat percentiles (usec): 00:20:12.009 | 1.00th=[ 113], 5.00th=[ 119], 10.00th=[ 124], 20.00th=[ 133], 00:20:12.009 | 30.00th=[ 147], 40.00th=[ 184], 50.00th=[ 196], 60.00th=[ 202], 00:20:12.009 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 233], 00:20:12.009 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 273], 99.95th=[ 306], 00:20:12.009 | 99.99th=[34341] 00:20:12.009 bw ( KiB/s): min= 8192, max= 8192, per=18.77%, avg=8192.00, stdev= 0.00, samples=1 00:20:12.009 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:12.009 lat (usec) : 100=0.02%, 250=81.36%, 500=18.55%, 750=0.02% 00:20:12.009 lat (msec) : 2=0.02%, 50=0.02% 00:20:12.009 cpu : usr=2.20%, sys=7.70%, ctx=4265, majf=0, minf=11 00:20:12.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.009 issued rwts: total=2048,2216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.009 job2: (groupid=0, jobs=1): err= 0: pid=77019: Fri Dec 6 14:34:18 2024 00:20:12.009 read: IOPS=2750, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:20:12.009 slat (nsec): min=11656, max=46846, avg=13748.70, stdev=2592.47 00:20:12.009 clat (usec): min=138, max=241, avg=168.95, stdev=12.75 00:20:12.009 lat (usec): min=150, max=253, avg=182.70, stdev=13.09 00:20:12.009 clat percentiles (usec): 00:20:12.009 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:20:12.009 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:20:12.009 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:20:12.009 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 237], 99.95th=[ 239], 00:20:12.010 | 99.99th=[ 241] 00:20:12.010 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:20:12.010 slat (nsec): min=17799, max=99757, avg=21670.26, stdev=5001.16 00:20:12.010 clat (usec): min=109, max=1984, avg=137.41, stdev=36.17 00:20:12.010 lat (usec): min=128, max=2012, avg=159.08, stdev=36.74 00:20:12.010 clat percentiles (usec): 00:20:12.010 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 127], 00:20:12.010 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:20:12.010 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:20:12.010 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 208], 99.95th=[ 429], 00:20:12.010 | 99.99th=[ 1991] 00:20:12.010 bw ( KiB/s): min=12288, max=12288, per=28.16%, avg=12288.00, stdev= 0.00, samples=1 00:20:12.010 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:20:12.010 lat (usec) : 250=99.95%, 500=0.03% 00:20:12.010 lat (msec) : 2=0.02% 00:20:12.010 cpu : usr=2.10%, sys=7.70%, ctx=5826, majf=0, minf=16 
00:20:12.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.010 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.010 job3: (groupid=0, jobs=1): err= 0: pid=77020: Fri Dec 6 14:34:18 2024 00:20:12.010 read: IOPS=2736, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:20:12.010 slat (nsec): min=12242, max=53943, avg=15533.26, stdev=4005.84 00:20:12.010 clat (usec): min=138, max=2768, avg=168.79, stdev=53.10 00:20:12.010 lat (usec): min=152, max=2781, avg=184.32, stdev=53.21 00:20:12.010 clat percentiles (usec): 00:20:12.010 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:20:12.010 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:20:12.010 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 192], 00:20:12.010 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 478], 99.95th=[ 766], 00:20:12.010 | 99.99th=[ 2769] 00:20:12.010 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:20:12.010 slat (usec): min=18, max=182, avg=23.81, stdev= 6.88 00:20:12.010 clat (usec): min=103, max=245, avg=134.66, stdev=13.47 00:20:12.010 lat (usec): min=128, max=373, avg=158.47, stdev=15.31 00:20:12.010 clat percentiles (usec): 00:20:12.010 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:20:12.010 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:20:12.010 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:20:12.010 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 202], 99.95th=[ 237], 00:20:12.010 | 99.99th=[ 245] 00:20:12.010 bw ( KiB/s): min=12288, max=12312, per=28.19%, avg=12300.00, stdev=16.97, samples=2 00:20:12.010 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:20:12.010 lat (usec) : 250=99.93%, 500=0.03%, 1000=0.02% 00:20:12.010 lat (msec) : 4=0.02% 00:20:12.010 cpu : usr=2.20%, sys=8.00%, ctx=5811, majf=0, minf=3 00:20:12.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.010 issued rwts: total=2739,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.010 00:20:12.010 Run status group 0 (all jobs): 00:20:12.010 READ: bw=37.6MiB/s (39.4MB/s), 8184KiB/s-10.7MiB/s (8380kB/s-11.3MB/s), io=37.6MiB (39.5MB), run=1001-1001msec 00:20:12.010 WRITE: bw=42.6MiB/s (44.7MB/s), 8855KiB/s-12.0MiB/s (9068kB/s-12.6MB/s), io=42.7MiB (44.7MB), run=1001-1001msec 00:20:12.010 00:20:12.010 Disk stats (read/write): 00:20:12.010 nvme0n1: ios=1774/2048, merge=0/0, ticks=417/416, in_queue=833, util=87.58% 00:20:12.010 nvme0n2: ios=1549/1942, merge=0/0, ticks=366/422, in_queue=788, util=87.40% 00:20:12.010 nvme0n3: ios=2427/2560, merge=0/0, ticks=414/374, in_queue=788, util=89.20% 00:20:12.010 nvme0n4: ios=2414/2560, merge=0/0, ticks=420/376, in_queue=796, util=89.67% 00:20:12.010 14:34:18 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:12.010 [global] 00:20:12.010 thread=1 00:20:12.010 invalidate=1 00:20:12.010 rw=randwrite 00:20:12.010 time_based=1 00:20:12.010 
runtime=1 00:20:12.010 ioengine=libaio 00:20:12.010 direct=1 00:20:12.010 bs=4096 00:20:12.010 iodepth=1 00:20:12.010 norandommap=0 00:20:12.010 numjobs=1 00:20:12.010 00:20:12.010 verify_dump=1 00:20:12.010 verify_backlog=512 00:20:12.010 verify_state_save=0 00:20:12.010 do_verify=1 00:20:12.010 verify=crc32c-intel 00:20:12.010 [job0] 00:20:12.010 filename=/dev/nvme0n1 00:20:12.010 [job1] 00:20:12.010 filename=/dev/nvme0n2 00:20:12.010 [job2] 00:20:12.010 filename=/dev/nvme0n3 00:20:12.010 [job3] 00:20:12.010 filename=/dev/nvme0n4 00:20:12.010 Could not set queue depth (nvme0n1) 00:20:12.010 Could not set queue depth (nvme0n2) 00:20:12.010 Could not set queue depth (nvme0n3) 00:20:12.010 Could not set queue depth (nvme0n4) 00:20:12.010 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.010 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.010 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.010 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:12.010 fio-3.35 00:20:12.010 Starting 4 threads 00:20:13.385 00:20:13.385 job0: (groupid=0, jobs=1): err= 0: pid=77073: Fri Dec 6 14:34:19 2024 00:20:13.385 read: IOPS=2786, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:20:13.385 slat (usec): min=12, max=167, avg=15.65, stdev= 6.68 00:20:13.385 clat (usec): min=83, max=999, avg=168.20, stdev=25.93 00:20:13.385 lat (usec): min=149, max=1013, avg=183.85, stdev=26.87 00:20:13.385 clat percentiles (usec): 00:20:13.385 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:20:13.385 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:20:13.385 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 198], 00:20:13.385 | 99.00th=[ 219], 99.50th=[ 265], 99.90th=[ 461], 99.95th=[ 578], 00:20:13.385 | 99.99th=[ 996] 00:20:13.385 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:20:13.385 slat (usec): min=18, max=147, avg=23.27, stdev= 7.16 00:20:13.385 clat (usec): min=101, max=569, avg=132.31, stdev=17.32 00:20:13.385 lat (usec): min=122, max=592, avg=155.57, stdev=18.74 00:20:13.385 clat percentiles (usec): 00:20:13.385 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 122], 00:20:13.385 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:20:13.385 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 159], 00:20:13.385 | 99.00th=[ 180], 99.50th=[ 198], 99.90th=[ 260], 99.95th=[ 379], 00:20:13.385 | 99.99th=[ 570] 00:20:13.385 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:20:13.385 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:20:13.385 lat (usec) : 100=0.02%, 250=99.64%, 500=0.29%, 750=0.03%, 1000=0.02% 00:20:13.385 cpu : usr=2.60%, sys=7.70%, ctx=5874, majf=0, minf=7 00:20:13.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.385 issued rwts: total=2789,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.385 job1: (groupid=0, jobs=1): err= 0: pid=77074: Fri Dec 6 14:34:19 2024 00:20:13.385 read: IOPS=1611, BW=6446KiB/s 
(6600kB/s)(6452KiB/1001msec) 00:20:13.385 slat (nsec): min=8672, max=54033, avg=12520.70, stdev=3203.86 00:20:13.385 clat (usec): min=186, max=719, avg=280.48, stdev=23.60 00:20:13.385 lat (usec): min=196, max=731, avg=293.00, stdev=23.65 00:20:13.385 clat percentiles (usec): 00:20:13.385 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:20:13.385 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:20:13.385 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 314], 00:20:13.385 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 478], 99.95th=[ 717], 00:20:13.385 | 99.99th=[ 717] 00:20:13.385 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:13.385 slat (usec): min=10, max=1076, avg=23.62, stdev=24.96 00:20:13.385 clat (usec): min=4, max=4248, avg=231.16, stdev=105.20 00:20:13.385 lat (usec): min=131, max=4295, avg=254.79, stdev=108.30 00:20:13.385 clat percentiles (usec): 00:20:13.385 | 1.00th=[ 126], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 212], 00:20:13.385 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:20:13.385 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:20:13.385 | 99.00th=[ 289], 99.50th=[ 326], 99.90th=[ 1188], 99.95th=[ 1778], 00:20:13.385 | 99.99th=[ 4228] 00:20:13.385 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:20:13.385 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:13.385 lat (usec) : 10=0.03%, 250=49.30%, 500=50.42%, 750=0.08%, 1000=0.05% 00:20:13.385 lat (msec) : 2=0.08%, 10=0.03% 00:20:13.385 cpu : usr=1.50%, sys=5.00%, ctx=3665, majf=0, minf=17 00:20:13.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.385 issued rwts: total=1613,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.385 job2: (groupid=0, jobs=1): err= 0: pid=77075: Fri Dec 6 14:34:19 2024 00:20:13.385 read: IOPS=2775, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:20:13.385 slat (nsec): min=11295, max=59188, avg=13091.36, stdev=2743.99 00:20:13.385 clat (usec): min=137, max=2002, avg=170.38, stdev=37.48 00:20:13.385 lat (usec): min=153, max=2015, avg=183.48, stdev=37.58 00:20:13.385 clat percentiles (usec): 00:20:13.385 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:20:13.385 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:20:13.385 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 196], 00:20:13.385 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 241], 99.95th=[ 306], 00:20:13.386 | 99.99th=[ 2008] 00:20:13.386 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:20:13.386 slat (nsec): min=17444, max=97268, avg=20039.16, stdev=4588.02 00:20:13.386 clat (usec): min=105, max=245, avg=136.81, stdev=13.42 00:20:13.386 lat (usec): min=126, max=336, avg=156.85, stdev=14.24 00:20:13.386 clat percentiles (usec): 00:20:13.386 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:20:13.386 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:20:13.386 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:20:13.386 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 208], 00:20:13.386 | 99.99th=[ 245] 00:20:13.386 bw ( KiB/s): min=12288, max=12288, 
per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:20:13.386 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:20:13.386 lat (usec) : 250=99.97%, 500=0.02% 00:20:13.386 lat (msec) : 4=0.02% 00:20:13.386 cpu : usr=1.60%, sys=7.70%, ctx=5854, majf=0, minf=14 00:20:13.386 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.386 issued rwts: total=2778,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.386 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.386 job3: (groupid=0, jobs=1): err= 0: pid=77076: Fri Dec 6 14:34:19 2024 00:20:13.386 read: IOPS=1642, BW=6569KiB/s (6727kB/s)(6576KiB/1001msec) 00:20:13.386 slat (nsec): min=9433, max=62095, avg=12198.37, stdev=3624.49 00:20:13.386 clat (usec): min=143, max=777, avg=279.90, stdev=27.15 00:20:13.386 lat (usec): min=156, max=795, avg=292.09, stdev=27.20 00:20:13.386 clat percentiles (usec): 00:20:13.386 | 1.00th=[ 190], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:20:13.386 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:20:13.386 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:20:13.386 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 537], 99.95th=[ 775], 00:20:13.386 | 99.99th=[ 775] 00:20:13.386 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:13.386 slat (usec): min=10, max=112, avg=22.26, stdev= 6.59 00:20:13.386 clat (usec): min=112, max=2724, avg=228.73, stdev=61.04 00:20:13.386 lat (usec): min=131, max=2747, avg=250.99, stdev=61.15 00:20:13.386 clat percentiles (usec): 00:20:13.386 | 1.00th=[ 133], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:20:13.386 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:20:13.386 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 265], 00:20:13.386 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 379], 99.95th=[ 635], 00:20:13.386 | 99.99th=[ 2737] 00:20:13.386 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:20:13.386 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:13.386 lat (usec) : 250=50.16%, 500=49.73%, 750=0.05%, 1000=0.03% 00:20:13.386 lat (msec) : 4=0.03% 00:20:13.386 cpu : usr=0.90%, sys=5.60%, ctx=3693, majf=0, minf=12 00:20:13.386 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.386 issued rwts: total=1644,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.386 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.386 00:20:13.386 Run status group 0 (all jobs): 00:20:13.386 READ: bw=34.4MiB/s (36.1MB/s), 6446KiB/s-10.9MiB/s (6600kB/s-11.4MB/s), io=34.5MiB (36.1MB), run=1001-1001msec 00:20:13.386 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:20:13.386 00:20:13.386 Disk stats (read/write): 00:20:13.386 nvme0n1: ios=2535/2560, merge=0/0, ticks=483/367, in_queue=850, util=88.68% 00:20:13.386 nvme0n2: ios=1585/1602, merge=0/0, ticks=440/369, in_queue=809, util=87.89% 00:20:13.386 nvme0n3: ios=2484/2560, merge=0/0, ticks=437/375, in_queue=812, util=89.36% 00:20:13.386 nvme0n4: ios=1536/1640, merge=0/0, ticks=423/391, 
in_queue=814, util=89.73% 00:20:13.386 14:34:20 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:13.386 [global] 00:20:13.386 thread=1 00:20:13.386 invalidate=1 00:20:13.386 rw=write 00:20:13.386 time_based=1 00:20:13.386 runtime=1 00:20:13.386 ioengine=libaio 00:20:13.386 direct=1 00:20:13.386 bs=4096 00:20:13.386 iodepth=128 00:20:13.386 norandommap=0 00:20:13.386 numjobs=1 00:20:13.386 00:20:13.386 verify_dump=1 00:20:13.386 verify_backlog=512 00:20:13.386 verify_state_save=0 00:20:13.386 do_verify=1 00:20:13.386 verify=crc32c-intel 00:20:13.386 [job0] 00:20:13.386 filename=/dev/nvme0n1 00:20:13.386 [job1] 00:20:13.386 filename=/dev/nvme0n2 00:20:13.386 [job2] 00:20:13.386 filename=/dev/nvme0n3 00:20:13.386 [job3] 00:20:13.386 filename=/dev/nvme0n4 00:20:13.386 Could not set queue depth (nvme0n1) 00:20:13.386 Could not set queue depth (nvme0n2) 00:20:13.386 Could not set queue depth (nvme0n3) 00:20:13.386 Could not set queue depth (nvme0n4) 00:20:13.386 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:13.386 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:13.386 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:13.386 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:13.386 fio-3.35 00:20:13.386 Starting 4 threads 00:20:14.763 00:20:14.763 job0: (groupid=0, jobs=1): err= 0: pid=77137: Fri Dec 6 14:34:21 2024 00:20:14.763 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(21.9MiB/1001msec) 00:20:14.763 slat (usec): min=4, max=2714, avg=84.38, stdev=387.21 00:20:14.763 clat (usec): min=362, max=13647, avg=11113.53, stdev=1069.49 00:20:14.763 lat (usec): min=2728, max=13657, avg=11197.90, stdev=1010.60 00:20:14.763 clat percentiles (usec): 00:20:14.763 | 1.00th=[ 6325], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10945], 00:20:14.763 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:20:14.763 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:20:14.763 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13566], 99.95th=[13566], 00:20:14.763 | 99.99th=[13698] 00:20:14.763 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:20:14.763 slat (usec): min=7, max=2899, avg=86.86, stdev=359.41 00:20:14.763 clat (usec): min=8491, max=14137, avg=11400.67, stdev=1150.80 00:20:14.763 lat (usec): min=8510, max=14159, avg=11487.53, stdev=1145.20 00:20:14.763 clat percentiles (usec): 00:20:14.763 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:20:14.763 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11600], 60.00th=[11994], 00:20:14.763 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12780], 95.00th=[13042], 00:20:14.763 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14091], 99.95th=[14091], 00:20:14.763 | 99.99th=[14091] 00:20:14.763 bw ( KiB/s): min=23752, max=23752, per=36.45%, avg=23752.00, stdev= 0.00, samples=1 00:20:14.763 iops : min= 5938, max= 5938, avg=5938.00, stdev= 0.00, samples=1 00:20:14.763 lat (usec) : 500=0.01% 00:20:14.763 lat (msec) : 4=0.33%, 10=10.86%, 20=88.80% 00:20:14.763 cpu : usr=4.50%, sys=12.70%, ctx=786, majf=0, minf=7 00:20:14.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:14.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:14.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.763 issued rwts: total=5609,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.763 job1: (groupid=0, jobs=1): err= 0: pid=77138: Fri Dec 6 14:34:21 2024 00:20:14.763 read: IOPS=2202, BW=8810KiB/s (9021kB/s)(8836KiB/1003msec) 00:20:14.763 slat (usec): min=4, max=6487, avg=238.75, stdev=945.13 00:20:14.763 clat (usec): min=338, max=46863, avg=30059.54, stdev=7328.79 00:20:14.763 lat (usec): min=6148, max=46883, avg=30298.29, stdev=7309.85 00:20:14.763 clat percentiles (usec): 00:20:14.763 | 1.00th=[ 6718], 5.00th=[20055], 10.00th=[23200], 20.00th=[26346], 00:20:14.763 | 30.00th=[27395], 40.00th=[27657], 50.00th=[28443], 60.00th=[28705], 00:20:14.763 | 70.00th=[32375], 80.00th=[35914], 90.00th=[41681], 95.00th=[44303], 00:20:14.763 | 99.00th=[45351], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:20:14.763 | 99.99th=[46924] 00:20:14.763 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:20:14.763 slat (usec): min=14, max=7136, avg=176.90, stdev=854.15 00:20:14.763 clat (usec): min=13419, max=42215, avg=23379.35, stdev=4893.24 00:20:14.763 lat (usec): min=16662, max=42241, avg=23556.25, stdev=4858.33 00:20:14.763 clat percentiles (usec): 00:20:14.763 | 1.00th=[15926], 5.00th=[17171], 10.00th=[17433], 20.00th=[19268], 00:20:14.763 | 30.00th=[19530], 40.00th=[21365], 50.00th=[23462], 60.00th=[25035], 00:20:14.763 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:20:14.763 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:14.763 | 99.99th=[42206] 00:20:14.763 bw ( KiB/s): min= 9256, max=11224, per=15.71%, avg=10240.00, stdev=1391.59, samples=2 00:20:14.763 iops : min= 2314, max= 2806, avg=2560.00, stdev=347.90, samples=2 00:20:14.763 lat (usec) : 500=0.02% 00:20:14.763 lat (msec) : 10=0.67%, 20=21.26%, 50=78.05% 00:20:14.763 cpu : usr=2.69%, sys=7.19%, ctx=220, majf=0, minf=15 00:20:14.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:14.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.763 issued rwts: total=2209,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.763 job2: (groupid=0, jobs=1): err= 0: pid=77139: Fri Dec 6 14:34:21 2024 00:20:14.763 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:20:14.763 slat (usec): min=5, max=3318, avg=98.86, stdev=437.33 00:20:14.764 clat (usec): min=10025, max=16190, avg=13103.61, stdev=906.83 00:20:14.764 lat (usec): min=10177, max=18099, avg=13202.47, stdev=825.87 00:20:14.764 clat percentiles (usec): 00:20:14.764 | 1.00th=[10552], 5.00th=[10945], 10.00th=[11469], 20.00th=[12780], 00:20:14.764 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:20:14.764 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[14091], 00:20:14.764 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16057], 99.95th=[16057], 00:20:14.764 | 99.99th=[16188] 00:20:14.764 write: IOPS=5081, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1002msec); 0 zone resets 00:20:14.764 slat (usec): min=10, max=3265, avg=99.40, stdev=418.72 00:20:14.764 clat (usec): min=261, max=15745, avg=12971.03, stdev=1593.51 00:20:14.764 lat (usec): min=2991, max=16143, avg=13070.42, stdev=1579.29 00:20:14.764 clat 
percentiles (usec): 00:20:14.764 | 1.00th=[ 6849], 5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:20:14.764 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:20:14.764 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14615], 95.00th=[14877], 00:20:14.764 | 99.00th=[15401], 99.50th=[15664], 99.90th=[15795], 99.95th=[15795], 00:20:14.764 | 99.99th=[15795] 00:20:14.764 bw ( KiB/s): min=19240, max=20480, per=30.48%, avg=19860.00, stdev=876.81, samples=2 00:20:14.764 iops : min= 4810, max= 5120, avg=4965.00, stdev=219.20, samples=2 00:20:14.764 lat (usec) : 500=0.01% 00:20:14.764 lat (msec) : 4=0.37%, 10=0.34%, 20=99.28% 00:20:14.764 cpu : usr=3.80%, sys=14.59%, ctx=699, majf=0, minf=15 00:20:14.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:14.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.764 issued rwts: total=4608,5092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.764 job3: (groupid=0, jobs=1): err= 0: pid=77140: Fri Dec 6 14:34:21 2024 00:20:14.764 read: IOPS=2801, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1004msec) 00:20:14.764 slat (usec): min=5, max=8788, avg=173.50, stdev=795.29 00:20:14.764 clat (usec): min=429, max=34431, avg=22192.21, stdev=3961.33 00:20:14.764 lat (usec): min=5696, max=34471, avg=22365.70, stdev=4017.64 00:20:14.764 clat percentiles (usec): 00:20:14.764 | 1.00th=[ 6849], 5.00th=[16909], 10.00th=[19792], 20.00th=[20579], 00:20:14.764 | 30.00th=[20841], 40.00th=[20841], 50.00th=[21103], 60.00th=[21890], 00:20:14.764 | 70.00th=[23200], 80.00th=[24511], 90.00th=[28967], 95.00th=[29492], 00:20:14.764 | 99.00th=[29754], 99.50th=[30540], 99.90th=[34341], 99.95th=[34341], 00:20:14.764 | 99.99th=[34341] 00:20:14.764 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:20:14.764 slat (usec): min=13, max=6434, avg=158.37, stdev=763.81 00:20:14.764 clat (usec): min=12854, max=35339, avg=20811.13, stdev=4665.43 00:20:14.764 lat (usec): min=13687, max=35368, avg=20969.50, stdev=4732.32 00:20:14.764 clat percentiles (usec): 00:20:14.764 | 1.00th=[14877], 5.00th=[15795], 10.00th=[16057], 20.00th=[17171], 00:20:14.764 | 30.00th=[17957], 40.00th=[19268], 50.00th=[20317], 60.00th=[20841], 00:20:14.764 | 70.00th=[21365], 80.00th=[22152], 90.00th=[26608], 95.00th=[33424], 00:20:14.764 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:20:14.764 | 99.99th=[35390] 00:20:14.764 bw ( KiB/s): min=12288, max=12288, per=18.86%, avg=12288.00, stdev= 0.00, samples=2 00:20:14.764 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:20:14.764 lat (usec) : 500=0.02% 00:20:14.764 lat (msec) : 10=0.87%, 20=30.11%, 50=69.01% 00:20:14.764 cpu : usr=3.39%, sys=9.27%, ctx=244, majf=0, minf=15 00:20:14.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:20:14.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.764 issued rwts: total=2813,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.764 00:20:14.764 Run status group 0 (all jobs): 00:20:14.764 READ: bw=59.3MiB/s (62.2MB/s), 8810KiB/s-21.9MiB/s (9021kB/s-23.0MB/s), io=59.5MiB (62.4MB), run=1001-1004msec 00:20:14.764 WRITE: bw=63.6MiB/s 
(66.7MB/s), 9.97MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=63.9MiB (67.0MB), run=1001-1004msec 00:20:14.764 00:20:14.764 Disk stats (read/write): 00:20:14.764 nvme0n1: ios=4658/5029, merge=0/0, ticks=12165/12365, in_queue=24530, util=88.18% 00:20:14.764 nvme0n2: ios=2033/2048, merge=0/0, ticks=15509/10052, in_queue=25561, util=87.44% 00:20:14.764 nvme0n3: ios=4096/4201, merge=0/0, ticks=12480/11744, in_queue=24224, util=88.99% 00:20:14.764 nvme0n4: ios=2443/2560, merge=0/0, ticks=17753/15291, in_queue=33044, util=89.55% 00:20:14.764 14:34:21 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:14.764 [global] 00:20:14.764 thread=1 00:20:14.764 invalidate=1 00:20:14.764 rw=randwrite 00:20:14.764 time_based=1 00:20:14.764 runtime=1 00:20:14.764 ioengine=libaio 00:20:14.764 direct=1 00:20:14.764 bs=4096 00:20:14.764 iodepth=128 00:20:14.764 norandommap=0 00:20:14.764 numjobs=1 00:20:14.764 00:20:14.764 verify_dump=1 00:20:14.764 verify_backlog=512 00:20:14.764 verify_state_save=0 00:20:14.764 do_verify=1 00:20:14.764 verify=crc32c-intel 00:20:14.764 [job0] 00:20:14.764 filename=/dev/nvme0n1 00:20:14.764 [job1] 00:20:14.764 filename=/dev/nvme0n2 00:20:14.764 [job2] 00:20:14.764 filename=/dev/nvme0n3 00:20:14.764 [job3] 00:20:14.764 filename=/dev/nvme0n4 00:20:14.764 Could not set queue depth (nvme0n1) 00:20:14.764 Could not set queue depth (nvme0n2) 00:20:14.764 Could not set queue depth (nvme0n3) 00:20:14.764 Could not set queue depth (nvme0n4) 00:20:14.764 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.764 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.764 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.764 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.764 fio-3.35 00:20:14.764 Starting 4 threads 00:20:16.140 00:20:16.140 job0: (groupid=0, jobs=1): err= 0: pid=77193: Fri Dec 6 14:34:22 2024 00:20:16.140 read: IOPS=2279, BW=9118KiB/s (9337kB/s)(9200KiB/1009msec) 00:20:16.140 slat (usec): min=3, max=16083, avg=179.82, stdev=1071.69 00:20:16.140 clat (usec): min=3813, max=66917, avg=19746.81, stdev=10979.24 00:20:16.140 lat (usec): min=5572, max=66931, avg=19926.63, stdev=11073.01 00:20:16.140 clat percentiles (usec): 00:20:16.140 | 1.00th=[ 6521], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[11863], 00:20:16.140 | 30.00th=[12518], 40.00th=[12911], 50.00th=[15401], 60.00th=[16909], 00:20:16.140 | 70.00th=[21627], 80.00th=[28967], 90.00th=[35390], 95.00th=[42206], 00:20:16.140 | 99.00th=[55313], 99.50th=[61080], 99.90th=[66847], 99.95th=[66847], 00:20:16.140 | 99.99th=[66847] 00:20:16.140 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:20:16.140 slat (usec): min=4, max=16250, avg=222.90, stdev=1023.72 00:20:16.140 clat (msec): min=4, max=118, avg=32.20, stdev=23.04 00:20:16.140 lat (msec): min=4, max=118, avg=32.42, stdev=23.20 00:20:16.140 clat percentiles (msec): 00:20:16.140 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 20], 00:20:16.140 | 30.00th=[ 21], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:20:16.140 | 70.00th=[ 32], 80.00th=[ 45], 90.00th=[ 62], 95.00th=[ 85], 00:20:16.140 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 118], 99.95th=[ 118], 00:20:16.140 | 99.99th=[ 118] 00:20:16.140 bw ( KiB/s): min= 
8384, max=12112, per=16.27%, avg=10248.00, stdev=2636.09, samples=2 00:20:16.140 iops : min= 2096, max= 3028, avg=2562.00, stdev=659.02, samples=2 00:20:16.140 lat (msec) : 4=0.02%, 10=8.21%, 20=37.04%, 50=45.45%, 100=7.37% 00:20:16.140 lat (msec) : 250=1.91% 00:20:16.140 cpu : usr=2.08%, sys=6.75%, ctx=345, majf=0, minf=16 00:20:16.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:16.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.140 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.140 job1: (groupid=0, jobs=1): err= 0: pid=77194: Fri Dec 6 14:34:22 2024 00:20:16.140 read: IOPS=6118, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1006msec) 00:20:16.140 slat (usec): min=4, max=9304, avg=75.73, stdev=491.07 00:20:16.140 clat (usec): min=4196, max=20919, avg=10291.25, stdev=2431.16 00:20:16.140 lat (usec): min=4209, max=20934, avg=10366.98, stdev=2460.99 00:20:16.140 clat percentiles (usec): 00:20:16.140 | 1.00th=[ 5145], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8455], 00:20:16.140 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10421], 00:20:16.140 | 70.00th=[10814], 80.00th=[11731], 90.00th=[13042], 95.00th=[15401], 00:20:16.140 | 99.00th=[19268], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:20:16.140 | 99.99th=[20841] 00:20:16.140 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:20:16.140 slat (usec): min=5, max=8410, avg=73.32, stdev=501.74 00:20:16.140 clat (usec): min=3643, max=20844, avg=9622.68, stdev=1932.12 00:20:16.140 lat (usec): min=3668, max=20855, avg=9696.00, stdev=1993.19 00:20:16.140 clat percentiles (usec): 00:20:16.141 | 1.00th=[ 4146], 5.00th=[ 5997], 10.00th=[ 7242], 20.00th=[ 8291], 00:20:16.141 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:20:16.141 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11731], 95.00th=[11994], 00:20:16.141 | 99.00th=[14222], 99.50th=[14877], 99.90th=[18482], 99.95th=[20579], 00:20:16.141 | 99.99th=[20841] 00:20:16.141 bw ( KiB/s): min=23656, max=28664, per=41.54%, avg=26160.00, stdev=3541.19, samples=2 00:20:16.141 iops : min= 5914, max= 7166, avg=6540.00, stdev=885.30, samples=2 00:20:16.141 lat (msec) : 4=0.34%, 10=52.44%, 20=46.94%, 50=0.28% 00:20:16.141 cpu : usr=5.67%, sys=13.73%, ctx=668, majf=0, minf=5 00:20:16.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.141 issued rwts: total=6155,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.141 job2: (groupid=0, jobs=1): err= 0: pid=77199: Fri Dec 6 14:34:22 2024 00:20:16.141 read: IOPS=2341, BW=9368KiB/s (9593kB/s)(9452KiB/1009msec) 00:20:16.141 slat (usec): min=7, max=16225, avg=162.30, stdev=910.44 00:20:16.141 clat (usec): min=399, max=53148, avg=19601.65, stdev=6307.00 00:20:16.141 lat (usec): min=11194, max=55956, avg=19763.95, stdev=6349.37 00:20:16.141 clat percentiles (usec): 00:20:16.141 | 1.00th=[11863], 5.00th=[14222], 10.00th=[14746], 20.00th=[15008], 00:20:16.141 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16712], 60.00th=[18744], 00:20:16.141 | 70.00th=[21890], 80.00th=[22938], 
90.00th=[27132], 95.00th=[34341], 00:20:16.141 | 99.00th=[39060], 99.50th=[42730], 99.90th=[53216], 99.95th=[53216], 00:20:16.141 | 99.99th=[53216] 00:20:16.141 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:20:16.141 slat (usec): min=6, max=27225, avg=235.15, stdev=1220.35 00:20:16.141 clat (usec): min=13705, max=67335, avg=31141.56, stdev=11766.30 00:20:16.141 lat (usec): min=13729, max=67389, avg=31376.71, stdev=11841.23 00:20:16.141 clat percentiles (usec): 00:20:16.141 | 1.00th=[15533], 5.00th=[20317], 10.00th=[21627], 20.00th=[23462], 00:20:16.141 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[28705], 00:20:16.141 | 70.00th=[30802], 80.00th=[38536], 90.00th=[51643], 95.00th=[58983], 00:20:16.141 | 99.00th=[63701], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:20:16.141 | 99.99th=[67634] 00:20:16.141 bw ( KiB/s): min= 9680, max=10778, per=16.24%, avg=10229.00, stdev=776.40, samples=2 00:20:16.141 iops : min= 2420, max= 2694, avg=2557.00, stdev=193.75, samples=2 00:20:16.141 lat (usec) : 500=0.02% 00:20:16.141 lat (msec) : 20=32.40%, 50=61.51%, 100=6.07% 00:20:16.141 cpu : usr=2.48%, sys=7.84%, ctx=386, majf=0, minf=15 00:20:16.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:16.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.141 issued rwts: total=2363,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.141 job3: (groupid=0, jobs=1): err= 0: pid=77200: Fri Dec 6 14:34:22 2024 00:20:16.141 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:20:16.141 slat (usec): min=4, max=18623, avg=109.70, stdev=791.37 00:20:16.141 clat (usec): min=2271, max=47447, avg=14547.24, stdev=5856.03 00:20:16.141 lat (usec): min=3293, max=47486, avg=14656.93, stdev=5913.63 00:20:16.141 clat percentiles (usec): 00:20:16.141 | 1.00th=[ 6849], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11469], 00:20:16.141 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13173], 00:20:16.141 | 70.00th=[14615], 80.00th=[16319], 90.00th=[22414], 95.00th=[26346], 00:20:16.141 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:20:16.141 | 99.99th=[47449] 00:20:16.141 write: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1003msec); 0 zone resets 00:20:16.141 slat (usec): min=5, max=28036, avg=125.85, stdev=1002.43 00:20:16.141 clat (usec): min=1561, max=68320, avg=15965.63, stdev=8424.18 00:20:16.141 lat (usec): min=4490, max=68356, avg=16091.47, stdev=8537.64 00:20:16.141 clat percentiles (usec): 00:20:16.141 | 1.00th=[ 4948], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[11600], 00:20:16.141 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13566], 00:20:16.141 | 70.00th=[13960], 80.00th=[21365], 90.00th=[29230], 95.00th=[34341], 00:20:16.141 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50594], 00:20:16.141 | 99.99th=[68682] 00:20:16.141 bw ( KiB/s): min=12312, max=20480, per=26.04%, avg=16396.00, stdev=5775.65, samples=2 00:20:16.141 iops : min= 3078, max= 5120, avg=4099.00, stdev=1443.91, samples=2 00:20:16.141 lat (msec) : 2=0.01%, 4=0.07%, 10=9.68%, 20=71.61%, 50=17.96% 00:20:16.141 lat (msec) : 100=0.66% 00:20:16.141 cpu : usr=3.49%, sys=10.98%, ctx=447, majf=0, minf=11 00:20:16.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:16.141 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.141 issued rwts: total=4096,4109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.141 00:20:16.141 Run status group 0 (all jobs): 00:20:16.141 READ: bw=57.7MiB/s (60.5MB/s), 9118KiB/s-23.9MiB/s (9337kB/s-25.1MB/s), io=58.3MiB (61.1MB), run=1003-1009msec 00:20:16.141 WRITE: bw=61.5MiB/s (64.5MB/s), 9.91MiB/s-25.8MiB/s (10.4MB/s-27.1MB/s), io=62.1MiB (65.1MB), run=1003-1009msec 00:20:16.141 00:20:16.141 Disk stats (read/write): 00:20:16.141 nvme0n1: ios=1837/2048, merge=0/0, ticks=34361/69443, in_queue=103804, util=86.96% 00:20:16.141 nvme0n2: ios=5295/5632, merge=0/0, ticks=49273/49949, in_queue=99222, util=87.04% 00:20:16.141 nvme0n3: ios=2048/2095, merge=0/0, ticks=18741/30760, in_queue=49501, util=88.69% 00:20:16.141 nvme0n4: ios=3072/3534, merge=0/0, ticks=36696/41158, in_queue=77854, util=89.03% 00:20:16.141 14:34:22 -- target/fio.sh@55 -- # sync 00:20:16.141 14:34:22 -- target/fio.sh@59 -- # fio_pid=77213 00:20:16.141 14:34:22 -- target/fio.sh@61 -- # sleep 3 00:20:16.141 14:34:22 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:16.141 [global] 00:20:16.141 thread=1 00:20:16.141 invalidate=1 00:20:16.141 rw=read 00:20:16.141 time_based=1 00:20:16.141 runtime=10 00:20:16.141 ioengine=libaio 00:20:16.141 direct=1 00:20:16.141 bs=4096 00:20:16.141 iodepth=1 00:20:16.141 norandommap=1 00:20:16.141 numjobs=1 00:20:16.141 00:20:16.141 [job0] 00:20:16.141 filename=/dev/nvme0n1 00:20:16.141 [job1] 00:20:16.141 filename=/dev/nvme0n2 00:20:16.141 [job2] 00:20:16.141 filename=/dev/nvme0n3 00:20:16.141 [job3] 00:20:16.141 filename=/dev/nvme0n4 00:20:16.141 Could not set queue depth (nvme0n1) 00:20:16.141 Could not set queue depth (nvme0n2) 00:20:16.141 Could not set queue depth (nvme0n3) 00:20:16.141 Could not set queue depth (nvme0n4) 00:20:16.141 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.141 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.141 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.141 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.141 fio-3.35 00:20:16.141 Starting 4 threads 00:20:19.426 14:34:25 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:19.426 fio: pid=77257, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:19.426 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35913728, buflen=4096 00:20:19.426 14:34:26 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:19.426 fio: pid=77256, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:19.426 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69885952, buflen=4096 00:20:19.686 14:34:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:19.686 14:34:26 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:19.945 fio: pid=77254, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
00:20:19.945 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53911552, buflen=4096 00:20:19.945 14:34:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:19.945 14:34:26 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:20.204 fio: pid=77255, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:20.204 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52383744, buflen=4096 00:20:20.204 00:20:20.204 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77254: Fri Dec 6 14:34:26 2024 00:20:20.204 read: IOPS=3739, BW=14.6MiB/s (15.3MB/s)(51.4MiB/3520msec) 00:20:20.204 slat (usec): min=7, max=12795, avg=17.03, stdev=199.46 00:20:20.204 clat (usec): min=48, max=4249, avg=249.11, stdev=98.59 00:20:20.204 lat (usec): min=129, max=13002, avg=266.15, stdev=221.84 00:20:20.204 clat percentiles (usec): 00:20:20.204 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 161], 00:20:20.204 | 30.00th=[ 169], 40.00th=[ 204], 50.00th=[ 243], 60.00th=[ 314], 00:20:20.204 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 351], 00:20:20.204 | 99.00th=[ 379], 99.50th=[ 404], 99.90th=[ 519], 99.95th=[ 1713], 00:20:20.204 | 99.99th=[ 3195] 00:20:20.204 bw ( KiB/s): min=11408, max=22440, per=27.27%, avg=14870.67, stdev=4974.50, samples=6 00:20:20.204 iops : min= 2852, max= 5610, avg=3717.67, stdev=1243.63, samples=6 00:20:20.204 lat (usec) : 50=0.01%, 250=51.26%, 500=48.58%, 750=0.08%, 1000=0.01% 00:20:20.204 lat (msec) : 2=0.03%, 4=0.03%, 10=0.01% 00:20:20.204 cpu : usr=1.19%, sys=4.15%, ctx=13197, majf=0, minf=1 00:20:20.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:20.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.204 issued rwts: total=13163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:20.205 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77255: Fri Dec 6 14:34:26 2024 00:20:20.205 read: IOPS=3366, BW=13.1MiB/s (13.8MB/s)(50.0MiB/3799msec) 00:20:20.205 slat (usec): min=9, max=16791, avg=18.31, stdev=218.87 00:20:20.205 clat (usec): min=3, max=3193, avg=277.49, stdev=81.71 00:20:20.205 lat (usec): min=133, max=17025, avg=295.80, stdev=232.64 00:20:20.205 clat percentiles (usec): 00:20:20.205 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 147], 20.00th=[ 219], 00:20:20.205 | 30.00th=[ 262], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 318], 00:20:20.205 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 351], 00:20:20.205 | 99.00th=[ 383], 99.50th=[ 433], 99.90th=[ 668], 99.95th=[ 816], 00:20:20.205 | 99.99th=[ 3130] 00:20:20.205 bw ( KiB/s): min=11432, max=16408, per=23.58%, avg=12855.43, stdev=1734.11, samples=7 00:20:20.205 iops : min= 2858, max= 4102, avg=3213.71, stdev=433.50, samples=7 00:20:20.205 lat (usec) : 4=0.01%, 100=0.01%, 250=27.88%, 500=71.87%, 750=0.16% 00:20:20.205 lat (usec) : 1000=0.03% 00:20:20.205 lat (msec) : 2=0.02%, 4=0.02% 00:20:20.205 cpu : usr=0.95%, sys=3.71%, ctx=12820, majf=0, minf=2 00:20:20.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:20.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:20.205 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.205 issued rwts: total=12790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:20.205 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77256: Fri Dec 6 14:34:26 2024 00:20:20.205 read: IOPS=5307, BW=20.7MiB/s (21.7MB/s)(66.6MiB/3215msec) 00:20:20.205 slat (usec): min=10, max=12601, avg=14.47, stdev=117.97 00:20:20.205 clat (usec): min=138, max=88616, avg=172.79, stdev=681.07 00:20:20.205 lat (usec): min=151, max=88628, avg=187.27, stdev=691.27 00:20:20.205 clat percentiles (usec): 00:20:20.205 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:20:20.205 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:20:20.205 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 194], 00:20:20.205 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 537], 99.95th=[ 922], 00:20:20.205 | 99.99th=[ 7373] 00:20:20.205 bw ( KiB/s): min=19152, max=23096, per=40.19%, avg=21911.17, stdev=1520.35, samples=6 00:20:20.205 iops : min= 4788, max= 5774, avg=5477.67, stdev=380.12, samples=6 00:20:20.205 lat (usec) : 250=96.73%, 500=3.15%, 750=0.05%, 1000=0.02% 00:20:20.205 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01%, 100=0.01% 00:20:20.205 cpu : usr=1.43%, sys=5.66%, ctx=17069, majf=0, minf=2 00:20:20.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:20.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.205 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.205 issued rwts: total=17063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:20.205 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77257: Fri Dec 6 14:34:26 2024 00:20:20.205 read: IOPS=3003, BW=11.7MiB/s (12.3MB/s)(34.2MiB/2920msec) 00:20:20.205 slat (usec): min=7, max=135, avg=15.27, stdev= 5.15 00:20:20.205 clat (usec): min=160, max=5165, avg=316.17, stdev=89.05 00:20:20.205 lat (usec): min=181, max=5270, avg=331.44, stdev=88.93 00:20:20.205 clat percentiles (usec): 00:20:20.205 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 285], 00:20:20.205 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:20:20.205 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 355], 00:20:20.205 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 906], 99.95th=[ 2278], 00:20:20.205 | 99.99th=[ 5145] 00:20:20.205 bw ( KiB/s): min=11400, max=12968, per=22.02%, avg=12004.80, stdev=697.35, samples=5 00:20:20.205 iops : min= 2850, max= 3242, avg=3001.20, stdev=174.34, samples=5 00:20:20.205 lat (usec) : 250=0.74%, 500=99.12%, 750=0.02%, 1000=0.01% 00:20:20.205 lat (msec) : 2=0.02%, 4=0.05%, 10=0.02% 00:20:20.205 cpu : usr=1.10%, sys=3.80%, ctx=8771, majf=0, minf=2 00:20:20.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:20.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.205 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.205 issued rwts: total=8769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:20.205 00:20:20.205 Run status group 0 (all jobs): 00:20:20.205 READ: bw=53.2MiB/s (55.8MB/s), 11.7MiB/s-20.7MiB/s 
(12.3MB/s-21.7MB/s), io=202MiB (212MB), run=2920-3799msec 00:20:20.205 00:20:20.205 Disk stats (read/write): 00:20:20.205 nvme0n1: ios=12524/0, merge=0/0, ticks=3135/0, in_queue=3135, util=95.19% 00:20:20.205 nvme0n2: ios=11613/0, merge=0/0, ticks=3416/0, in_queue=3416, util=95.44% 00:20:20.205 nvme0n3: ios=17021/0, merge=0/0, ticks=2891/0, in_queue=2891, util=96.08% 00:20:20.205 nvme0n4: ios=8594/0, merge=0/0, ticks=2706/0, in_queue=2706, util=96.89% 00:20:20.205 14:34:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:20.205 14:34:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:20.463 14:34:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:20.463 14:34:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:20.721 14:34:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:20.721 14:34:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:20.979 14:34:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:20.979 14:34:27 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:21.545 14:34:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:21.545 14:34:28 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:21.545 14:34:28 -- target/fio.sh@69 -- # fio_status=0 00:20:21.545 14:34:28 -- target/fio.sh@70 -- # wait 77213 00:20:21.545 14:34:28 -- target/fio.sh@70 -- # fio_status=4 00:20:21.545 14:34:28 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:21.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:21.804 14:34:28 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:21.804 14:34:28 -- common/autotest_common.sh@1208 -- # local i=0 00:20:21.804 14:34:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:21.804 14:34:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:21.804 14:34:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:21.804 14:34:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:21.804 14:34:28 -- common/autotest_common.sh@1220 -- # return 0 00:20:21.804 14:34:28 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:21.804 14:34:28 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:21.804 nvmf hotplug test: fio failed as expected 00:20:21.804 14:34:28 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.063 14:34:28 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:22.063 14:34:28 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:22.063 14:34:28 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:22.063 14:34:28 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:22.063 14:34:28 -- target/fio.sh@91 -- # nvmftestfini 00:20:22.063 14:34:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:22.063 14:34:28 -- nvmf/common.sh@116 -- # sync 00:20:22.063 14:34:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:22.063 14:34:28 -- nvmf/common.sh@119 -- # set +e 
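The hotplug phase above is expected to fail: fio keeps reading from the exported namespaces while the backing raid/malloc bdevs are deleted over RPC, so the wrapper exits non-zero (fio_status=4) and the script reports "nvmf hotplug test: fio failed as expected". A condensed sketch of that sequence, using the fio-wrapper and rpc.py invocations that appear in this run (the real fio.sh loops over every malloc/raid/concat bdev; only two deletions are shown here):

# start a 10-second read workload against the connected NVMe-oF devices, in the background
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3                                                    # let fio open the devices and start issuing I/O
# pull the backing bdevs out from under the running jobs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
# fio should now fail with "Operation not supported" (err=95) on the vanished devices
if wait "$fio_pid"; then
  echo "unexpected: fio succeeded after bdev removal"
else
  echo "nvmf hotplug test: fio failed as expected"
fi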
00:20:22.063 14:34:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:22.063 14:34:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:22.063 rmmod nvme_tcp 00:20:22.063 rmmod nvme_fabrics 00:20:22.063 rmmod nvme_keyring 00:20:22.063 14:34:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:22.063 14:34:28 -- nvmf/common.sh@123 -- # set -e 00:20:22.063 14:34:28 -- nvmf/common.sh@124 -- # return 0 00:20:22.063 14:34:28 -- nvmf/common.sh@477 -- # '[' -n 76719 ']' 00:20:22.063 14:34:28 -- nvmf/common.sh@478 -- # killprocess 76719 00:20:22.063 14:34:28 -- common/autotest_common.sh@936 -- # '[' -z 76719 ']' 00:20:22.063 14:34:28 -- common/autotest_common.sh@940 -- # kill -0 76719 00:20:22.063 14:34:28 -- common/autotest_common.sh@941 -- # uname 00:20:22.063 14:34:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.063 14:34:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76719 00:20:22.063 14:34:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:22.063 14:34:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:22.063 14:34:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76719' 00:20:22.063 killing process with pid 76719 00:20:22.063 14:34:28 -- common/autotest_common.sh@955 -- # kill 76719 00:20:22.063 14:34:28 -- common/autotest_common.sh@960 -- # wait 76719 00:20:22.322 14:34:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:22.322 14:34:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:22.322 14:34:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:22.322 14:34:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.322 14:34:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:22.322 14:34:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.322 14:34:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.322 14:34:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.582 14:34:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:22.582 00:20:22.582 real 0m20.232s 00:20:22.582 user 1m16.641s 00:20:22.582 sys 0m9.212s 00:20:22.582 14:34:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:22.582 14:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:22.582 ************************************ 00:20:22.582 END TEST nvmf_fio_target 00:20:22.582 ************************************ 00:20:22.582 14:34:29 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:22.582 14:34:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:22.582 14:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.582 14:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:22.582 ************************************ 00:20:22.582 START TEST nvmf_bdevio 00:20:22.582 ************************************ 00:20:22.582 14:34:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:22.582 * Looking for test storage... 
00:20:22.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:22.582 14:34:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:22.582 14:34:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:22.582 14:34:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:22.854 14:34:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:22.854 14:34:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:22.854 14:34:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:22.854 14:34:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:22.854 14:34:29 -- scripts/common.sh@335 -- # IFS=.-: 00:20:22.854 14:34:29 -- scripts/common.sh@335 -- # read -ra ver1 00:20:22.854 14:34:29 -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.854 14:34:29 -- scripts/common.sh@336 -- # read -ra ver2 00:20:22.854 14:34:29 -- scripts/common.sh@337 -- # local 'op=<' 00:20:22.854 14:34:29 -- scripts/common.sh@339 -- # ver1_l=2 00:20:22.854 14:34:29 -- scripts/common.sh@340 -- # ver2_l=1 00:20:22.854 14:34:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:22.854 14:34:29 -- scripts/common.sh@343 -- # case "$op" in 00:20:22.854 14:34:29 -- scripts/common.sh@344 -- # : 1 00:20:22.854 14:34:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:22.854 14:34:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.854 14:34:29 -- scripts/common.sh@364 -- # decimal 1 00:20:22.854 14:34:29 -- scripts/common.sh@352 -- # local d=1 00:20:22.854 14:34:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.854 14:34:29 -- scripts/common.sh@354 -- # echo 1 00:20:22.854 14:34:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:22.854 14:34:29 -- scripts/common.sh@365 -- # decimal 2 00:20:22.854 14:34:29 -- scripts/common.sh@352 -- # local d=2 00:20:22.854 14:34:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.854 14:34:29 -- scripts/common.sh@354 -- # echo 2 00:20:22.854 14:34:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:22.854 14:34:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:22.855 14:34:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:22.855 14:34:29 -- scripts/common.sh@367 -- # return 0 00:20:22.855 14:34:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.855 14:34:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.855 --rc genhtml_branch_coverage=1 00:20:22.855 --rc genhtml_function_coverage=1 00:20:22.855 --rc genhtml_legend=1 00:20:22.855 --rc geninfo_all_blocks=1 00:20:22.855 --rc geninfo_unexecuted_blocks=1 00:20:22.855 00:20:22.855 ' 00:20:22.855 14:34:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.855 --rc genhtml_branch_coverage=1 00:20:22.855 --rc genhtml_function_coverage=1 00:20:22.855 --rc genhtml_legend=1 00:20:22.855 --rc geninfo_all_blocks=1 00:20:22.855 --rc geninfo_unexecuted_blocks=1 00:20:22.855 00:20:22.855 ' 00:20:22.855 14:34:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.855 --rc genhtml_branch_coverage=1 00:20:22.855 --rc genhtml_function_coverage=1 00:20:22.855 --rc genhtml_legend=1 00:20:22.855 --rc geninfo_all_blocks=1 00:20:22.855 --rc geninfo_unexecuted_blocks=1 00:20:22.855 00:20:22.855 ' 00:20:22.855 
14:34:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.855 --rc genhtml_branch_coverage=1 00:20:22.855 --rc genhtml_function_coverage=1 00:20:22.855 --rc genhtml_legend=1 00:20:22.855 --rc geninfo_all_blocks=1 00:20:22.855 --rc geninfo_unexecuted_blocks=1 00:20:22.855 00:20:22.855 ' 00:20:22.855 14:34:29 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.855 14:34:29 -- nvmf/common.sh@7 -- # uname -s 00:20:22.855 14:34:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.855 14:34:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.855 14:34:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.855 14:34:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.855 14:34:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.855 14:34:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.855 14:34:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.855 14:34:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.855 14:34:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.855 14:34:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.855 14:34:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:22.855 14:34:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:22.855 14:34:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.855 14:34:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.855 14:34:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.855 14:34:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.855 14:34:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.855 14:34:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.855 14:34:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.855 14:34:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.855 14:34:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.855 14:34:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.855 14:34:29 -- paths/export.sh@5 -- # export PATH 00:20:22.855 14:34:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.855 14:34:29 -- nvmf/common.sh@46 -- # : 0 00:20:22.855 14:34:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:22.855 14:34:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:22.855 14:34:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:22.855 14:34:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.855 14:34:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.855 14:34:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:22.855 14:34:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:22.855 14:34:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:22.855 14:34:29 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:22.855 14:34:29 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:22.855 14:34:29 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:22.855 14:34:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:22.855 14:34:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.855 14:34:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:22.855 14:34:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:22.855 14:34:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:22.855 14:34:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.855 14:34:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.855 14:34:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.855 14:34:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:22.855 14:34:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:22.855 14:34:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:22.855 14:34:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:22.855 14:34:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:22.855 14:34:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:22.855 14:34:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.855 14:34:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.855 14:34:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:22.855 14:34:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:22.855 14:34:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.855 14:34:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.855 14:34:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.855 14:34:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.855 14:34:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.855 14:34:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.855 14:34:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.855 14:34:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.855 14:34:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:22.855 14:34:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:22.855 Cannot find device "nvmf_tgt_br" 00:20:22.855 14:34:29 -- nvmf/common.sh@154 -- # true 00:20:22.855 14:34:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.855 Cannot find device "nvmf_tgt_br2" 00:20:22.855 14:34:29 -- nvmf/common.sh@155 -- # true 00:20:22.855 14:34:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:22.855 14:34:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:22.855 Cannot find device "nvmf_tgt_br" 00:20:22.855 14:34:29 -- nvmf/common.sh@157 -- # true 00:20:22.855 14:34:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:22.855 Cannot find device "nvmf_tgt_br2" 00:20:22.855 14:34:29 -- nvmf/common.sh@158 -- # true 00:20:22.855 14:34:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:22.855 14:34:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:22.855 14:34:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.855 14:34:29 -- nvmf/common.sh@161 -- # true 00:20:22.855 14:34:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.855 14:34:29 -- nvmf/common.sh@162 -- # true 00:20:22.855 14:34:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.855 14:34:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.855 14:34:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.855 14:34:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.855 14:34:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.855 14:34:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.855 14:34:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.115 14:34:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.115 14:34:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.115 14:34:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:23.115 14:34:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:23.115 14:34:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:23.115 14:34:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:23.115 14:34:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.115 14:34:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.115 14:34:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:23.115 14:34:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:23.115 14:34:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:23.115 14:34:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.115 14:34:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.115 14:34:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.115 14:34:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.115 14:34:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.115 14:34:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:23.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:23.115 00:20:23.115 --- 10.0.0.2 ping statistics --- 00:20:23.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.115 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:23.115 14:34:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:23.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:23.115 00:20:23.115 --- 10.0.0.3 ping statistics --- 00:20:23.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.115 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:23.115 14:34:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:23.115 00:20:23.115 --- 10.0.0.1 ping statistics --- 00:20:23.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.115 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:23.115 14:34:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.115 14:34:29 -- nvmf/common.sh@421 -- # return 0 00:20:23.115 14:34:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:23.115 14:34:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.115 14:34:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:23.115 14:34:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:23.115 14:34:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.115 14:34:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:23.115 14:34:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:23.115 14:34:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:23.115 14:34:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:23.115 14:34:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.115 14:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.115 14:34:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:23.115 14:34:29 -- nvmf/common.sh@469 -- # nvmfpid=77591 00:20:23.115 14:34:29 -- nvmf/common.sh@470 -- # waitforlisten 77591 00:20:23.115 14:34:29 -- common/autotest_common.sh@829 -- # '[' -z 77591 ']' 00:20:23.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
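The nvmf_veth_init trace above builds the test network: the initiator keeps 10.0.0.1 on nvmf_init_if, the target addresses 10.0.0.2/10.0.0.3 sit on veth ends moved into the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, and port 4420 is opened in iptables before the reachability pings. A condensed sketch of the same topology, using the namespace and interface names from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                             # bridge the two peer ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target reachability check

With the namespace in place, the target itself is started inside it (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78), so its TCP listener on 10.0.0.2:4420 is only reachable through this veth/bridge path.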
00:20:23.115 14:34:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.115 14:34:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.115 14:34:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.115 14:34:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.115 14:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.115 [2024-12-06 14:34:30.012593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:23.115 [2024-12-06 14:34:30.012731] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.375 [2024-12-06 14:34:30.153656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:23.375 [2024-12-06 14:34:30.319663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:23.375 [2024-12-06 14:34:30.320133] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.375 [2024-12-06 14:34:30.320313] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.375 [2024-12-06 14:34:30.320546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.375 [2024-12-06 14:34:30.320944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:23.375 [2024-12-06 14:34:30.321041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:23.375 [2024-12-06 14:34:30.321181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:23.375 [2024-12-06 14:34:30.321191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.312 14:34:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.312 14:34:31 -- common/autotest_common.sh@862 -- # return 0 00:20:24.312 14:34:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:24.312 14:34:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.312 14:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:24.312 14:34:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.312 14:34:31 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.312 14:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.312 14:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:24.312 [2024-12-06 14:34:31.110818] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.312 14:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.312 14:34:31 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:24.312 14:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.312 14:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:24.312 Malloc0 00:20:24.312 14:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.312 14:34:31 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.312 14:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.312 14:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:24.312 14:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.312 14:34:31 -- 
target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:24.312 14:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.312 14:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:24.312 14:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.312 14:34:31 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.312 14:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.312 14:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:24.312 [2024-12-06 14:34:31.181975] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.312 14:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.312 14:34:31 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:24.312 14:34:31 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:24.312 14:34:31 -- nvmf/common.sh@520 -- # config=() 00:20:24.312 14:34:31 -- nvmf/common.sh@520 -- # local subsystem config 00:20:24.312 14:34:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:24.312 14:34:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:24.312 { 00:20:24.312 "params": { 00:20:24.312 "name": "Nvme$subsystem", 00:20:24.312 "trtype": "$TEST_TRANSPORT", 00:20:24.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.312 "adrfam": "ipv4", 00:20:24.312 "trsvcid": "$NVMF_PORT", 00:20:24.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.312 "hdgst": ${hdgst:-false}, 00:20:24.312 "ddgst": ${ddgst:-false} 00:20:24.312 }, 00:20:24.312 "method": "bdev_nvme_attach_controller" 00:20:24.312 } 00:20:24.312 EOF 00:20:24.312 )") 00:20:24.312 14:34:31 -- nvmf/common.sh@542 -- # cat 00:20:24.312 14:34:31 -- nvmf/common.sh@544 -- # jq . 00:20:24.312 14:34:31 -- nvmf/common.sh@545 -- # IFS=, 00:20:24.312 14:34:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:24.312 "params": { 00:20:24.312 "name": "Nvme1", 00:20:24.312 "trtype": "tcp", 00:20:24.312 "traddr": "10.0.0.2", 00:20:24.312 "adrfam": "ipv4", 00:20:24.312 "trsvcid": "4420", 00:20:24.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.312 "hdgst": false, 00:20:24.312 "ddgst": false 00:20:24.312 }, 00:20:24.312 "method": "bdev_nvme_attach_controller" 00:20:24.312 }' 00:20:24.312 [2024-12-06 14:34:31.249492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:24.312 [2024-12-06 14:34:31.249614] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77646 ] 00:20:24.570 [2024-12-06 14:34:31.393718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.570 [2024-12-06 14:34:31.531220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.570 [2024-12-06 14:34:31.531339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.570 [2024-12-06 14:34:31.531344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.828 [2024-12-06 14:34:31.719326] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
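Before bdevio runs, the target is provisioned entirely over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, the cnode1 subsystem, its namespace, and a listener on 10.0.0.2:4420; bdevio then attaches to it using the bdev_nvme_attach_controller config generated above and fed in via --json /dev/fd/62. The same provisioning sequence, written as direct rpc.py calls (a sketch that assumes the rpc.py path used elsewhere in this run and the exact flags shown in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                    # same transport options as the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                                       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420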
00:20:24.828 [2024-12-06 14:34:31.719689] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:24.828 I/O targets: 00:20:24.828 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:24.828 00:20:24.828 00:20:24.828 CUnit - A unit testing framework for C - Version 2.1-3 00:20:24.828 http://cunit.sourceforge.net/ 00:20:24.828 00:20:24.828 00:20:24.828 Suite: bdevio tests on: Nvme1n1 00:20:24.828 Test: blockdev write read block ...passed 00:20:25.087 Test: blockdev write zeroes read block ...passed 00:20:25.087 Test: blockdev write zeroes read no split ...passed 00:20:25.087 Test: blockdev write zeroes read split ...passed 00:20:25.087 Test: blockdev write zeroes read split partial ...passed 00:20:25.087 Test: blockdev reset ...[2024-12-06 14:34:31.838785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:25.087 [2024-12-06 14:34:31.839193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e38910 (9): Bad file descriptor 00:20:25.087 [2024-12-06 14:34:31.858577] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:25.087 passed 00:20:25.087 Test: blockdev write read 8 blocks ...passed 00:20:25.087 Test: blockdev write read size > 128k ...passed 00:20:25.087 Test: blockdev write read invalid size ...passed 00:20:25.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:25.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:25.087 Test: blockdev write read max offset ...passed 00:20:25.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:25.087 Test: blockdev writev readv 8 blocks ...passed 00:20:25.087 Test: blockdev writev readv 30 x 1block ...passed 00:20:25.087 Test: blockdev writev readv block ...passed 00:20:25.087 Test: blockdev writev readv size > 128k ...passed 00:20:25.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:25.087 Test: blockdev comparev and writev ...[2024-12-06 14:34:32.037028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.037617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.087 [2024-12-06 14:34:32.037784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.037907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.087 [2024-12-06 14:34:32.038468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.038567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.087 [2024-12-06 14:34:32.038651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.038735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.087 [2024-12-06 14:34:32.039578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.039885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.087 [2024-12-06 14:34:32.040066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.040130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.087 [2024-12-06 14:34:32.040597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.040911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.087 [2024-12-06 14:34:32.041572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.087 [2024-12-06 14:34:32.042255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.346 passed 00:20:25.346 Test: blockdev nvme passthru rw ...passed 00:20:25.346 Test: blockdev nvme passthru vendor specific ...[2024-12-06 14:34:32.126598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.346 [2024-12-06 14:34:32.127071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.346 [2024-12-06 14:34:32.127694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.346 [2024-12-06 14:34:32.127797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.346 [2024-12-06 14:34:32.128028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.346 [2024-12-06 14:34:32.128288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.346 [2024-12-06 14:34:32.128657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.346 passed 00:20:25.346 Test: blockdev nvme admin passthru ...[2024-12-06 14:34:32.128999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.346 passed 00:20:25.346 Test: blockdev copy ...passed 00:20:25.346 00:20:25.346 Run Summary: Type Total Ran Passed Failed Inactive 00:20:25.346 suites 1 1 n/a 0 0 00:20:25.346 tests 23 23 23 0 0 00:20:25.346 asserts 152 152 152 0 n/a 00:20:25.346 00:20:25.346 Elapsed time = 0.909 seconds 00:20:25.605 14:34:32 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.605 14:34:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.605 14:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:25.605 14:34:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.605 14:34:32 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:25.605 14:34:32 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:25.605 14:34:32 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:20:25.605 14:34:32 -- nvmf/common.sh@116 -- # sync 00:20:25.605 14:34:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:25.605 14:34:32 -- nvmf/common.sh@119 -- # set +e 00:20:25.605 14:34:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:25.605 14:34:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:25.605 rmmod nvme_tcp 00:20:25.605 rmmod nvme_fabrics 00:20:25.605 rmmod nvme_keyring 00:20:25.864 14:34:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:25.864 14:34:32 -- nvmf/common.sh@123 -- # set -e 00:20:25.864 14:34:32 -- nvmf/common.sh@124 -- # return 0 00:20:25.864 14:34:32 -- nvmf/common.sh@477 -- # '[' -n 77591 ']' 00:20:25.864 14:34:32 -- nvmf/common.sh@478 -- # killprocess 77591 00:20:25.864 14:34:32 -- common/autotest_common.sh@936 -- # '[' -z 77591 ']' 00:20:25.864 14:34:32 -- common/autotest_common.sh@940 -- # kill -0 77591 00:20:25.864 14:34:32 -- common/autotest_common.sh@941 -- # uname 00:20:25.864 14:34:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.864 14:34:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77591 00:20:25.864 killing process with pid 77591 00:20:25.864 14:34:32 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:25.864 14:34:32 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:25.864 14:34:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77591' 00:20:25.864 14:34:32 -- common/autotest_common.sh@955 -- # kill 77591 00:20:25.864 14:34:32 -- common/autotest_common.sh@960 -- # wait 77591 00:20:26.123 14:34:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:26.123 14:34:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:26.123 14:34:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:26.123 14:34:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.123 14:34:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:26.123 14:34:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.123 14:34:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.123 14:34:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.123 14:34:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:26.123 00:20:26.123 real 0m3.604s 00:20:26.123 user 0m12.710s 00:20:26.123 sys 0m0.901s 00:20:26.123 14:34:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:26.123 ************************************ 00:20:26.123 14:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:26.123 END TEST nvmf_bdevio 00:20:26.123 ************************************ 00:20:26.123 14:34:33 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:20:26.123 14:34:33 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:26.123 14:34:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:26.123 14:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:26.123 14:34:33 -- common/autotest_common.sh@10 -- # set +x 00:20:26.123 ************************************ 00:20:26.123 START TEST nvmf_bdevio_no_huge 00:20:26.123 ************************************ 00:20:26.123 14:34:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:26.382 * Looking for test storage... 
00:20:26.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:26.382 14:34:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:26.382 14:34:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:26.383 14:34:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:26.383 14:34:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:26.383 14:34:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:26.383 14:34:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:26.383 14:34:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:26.383 14:34:33 -- scripts/common.sh@335 -- # IFS=.-: 00:20:26.383 14:34:33 -- scripts/common.sh@335 -- # read -ra ver1 00:20:26.383 14:34:33 -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.383 14:34:33 -- scripts/common.sh@336 -- # read -ra ver2 00:20:26.383 14:34:33 -- scripts/common.sh@337 -- # local 'op=<' 00:20:26.383 14:34:33 -- scripts/common.sh@339 -- # ver1_l=2 00:20:26.383 14:34:33 -- scripts/common.sh@340 -- # ver2_l=1 00:20:26.383 14:34:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:26.383 14:34:33 -- scripts/common.sh@343 -- # case "$op" in 00:20:26.383 14:34:33 -- scripts/common.sh@344 -- # : 1 00:20:26.383 14:34:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:26.383 14:34:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:26.383 14:34:33 -- scripts/common.sh@364 -- # decimal 1 00:20:26.383 14:34:33 -- scripts/common.sh@352 -- # local d=1 00:20:26.383 14:34:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.383 14:34:33 -- scripts/common.sh@354 -- # echo 1 00:20:26.383 14:34:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:26.383 14:34:33 -- scripts/common.sh@365 -- # decimal 2 00:20:26.383 14:34:33 -- scripts/common.sh@352 -- # local d=2 00:20:26.383 14:34:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.383 14:34:33 -- scripts/common.sh@354 -- # echo 2 00:20:26.383 14:34:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:26.383 14:34:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:26.383 14:34:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:26.383 14:34:33 -- scripts/common.sh@367 -- # return 0 00:20:26.383 14:34:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.383 14:34:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:26.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.383 --rc genhtml_branch_coverage=1 00:20:26.383 --rc genhtml_function_coverage=1 00:20:26.383 --rc genhtml_legend=1 00:20:26.383 --rc geninfo_all_blocks=1 00:20:26.383 --rc geninfo_unexecuted_blocks=1 00:20:26.383 00:20:26.383 ' 00:20:26.383 14:34:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:26.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.383 --rc genhtml_branch_coverage=1 00:20:26.383 --rc genhtml_function_coverage=1 00:20:26.383 --rc genhtml_legend=1 00:20:26.383 --rc geninfo_all_blocks=1 00:20:26.383 --rc geninfo_unexecuted_blocks=1 00:20:26.383 00:20:26.383 ' 00:20:26.383 14:34:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:26.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.383 --rc genhtml_branch_coverage=1 00:20:26.383 --rc genhtml_function_coverage=1 00:20:26.383 --rc genhtml_legend=1 00:20:26.383 --rc geninfo_all_blocks=1 00:20:26.383 --rc geninfo_unexecuted_blocks=1 00:20:26.383 00:20:26.383 ' 00:20:26.383 
14:34:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:26.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.383 --rc genhtml_branch_coverage=1 00:20:26.383 --rc genhtml_function_coverage=1 00:20:26.383 --rc genhtml_legend=1 00:20:26.383 --rc geninfo_all_blocks=1 00:20:26.383 --rc geninfo_unexecuted_blocks=1 00:20:26.383 00:20:26.383 ' 00:20:26.383 14:34:33 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.383 14:34:33 -- nvmf/common.sh@7 -- # uname -s 00:20:26.383 14:34:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.383 14:34:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.383 14:34:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.383 14:34:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.383 14:34:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.383 14:34:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.383 14:34:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.383 14:34:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.383 14:34:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.383 14:34:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.383 14:34:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:26.383 14:34:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:26.383 14:34:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.383 14:34:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.383 14:34:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.383 14:34:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.383 14:34:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.383 14:34:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.383 14:34:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.383 14:34:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.383 14:34:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.383 14:34:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.383 14:34:33 -- paths/export.sh@5 -- # export PATH 00:20:26.383 14:34:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.383 14:34:33 -- nvmf/common.sh@46 -- # : 0 00:20:26.383 14:34:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:26.383 14:34:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:26.383 14:34:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:26.383 14:34:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.383 14:34:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.383 14:34:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:26.383 14:34:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:26.383 14:34:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:26.383 14:34:33 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:26.383 14:34:33 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:26.383 14:34:33 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:26.383 14:34:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:26.383 14:34:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.383 14:34:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:26.383 14:34:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:26.383 14:34:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:26.383 14:34:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.383 14:34:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.383 14:34:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.383 14:34:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:26.383 14:34:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:26.383 14:34:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:26.383 14:34:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:26.383 14:34:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:26.383 14:34:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:26.383 14:34:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.383 14:34:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.383 14:34:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:26.383 14:34:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:26.383 14:34:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.383 14:34:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.383 14:34:33 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.383 14:34:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.383 14:34:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.383 14:34:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.383 14:34:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.383 14:34:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.383 14:34:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:26.383 14:34:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:26.383 Cannot find device "nvmf_tgt_br" 00:20:26.383 14:34:33 -- nvmf/common.sh@154 -- # true 00:20:26.383 14:34:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.383 Cannot find device "nvmf_tgt_br2" 00:20:26.383 14:34:33 -- nvmf/common.sh@155 -- # true 00:20:26.383 14:34:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:26.383 14:34:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:26.383 Cannot find device "nvmf_tgt_br" 00:20:26.383 14:34:33 -- nvmf/common.sh@157 -- # true 00:20:26.383 14:34:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:26.383 Cannot find device "nvmf_tgt_br2" 00:20:26.383 14:34:33 -- nvmf/common.sh@158 -- # true 00:20:26.383 14:34:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:26.642 14:34:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:26.642 14:34:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.642 14:34:33 -- nvmf/common.sh@161 -- # true 00:20:26.642 14:34:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.642 14:34:33 -- nvmf/common.sh@162 -- # true 00:20:26.642 14:34:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.642 14:34:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.642 14:34:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.642 14:34:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.642 14:34:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.642 14:34:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.642 14:34:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.642 14:34:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:26.642 14:34:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:26.642 14:34:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:26.642 14:34:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:26.642 14:34:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:26.642 14:34:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:26.642 14:34:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.642 14:34:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.642 14:34:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:26.642 14:34:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:26.642 14:34:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:26.642 14:34:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.642 14:34:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.642 14:34:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.642 14:34:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.642 14:34:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.642 14:34:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:26.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:26.642 00:20:26.642 --- 10.0.0.2 ping statistics --- 00:20:26.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.642 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:26.642 14:34:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:26.642 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:26.642 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:20:26.642 00:20:26.642 --- 10.0.0.3 ping statistics --- 00:20:26.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.642 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:26.642 14:34:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:26.642 00:20:26.642 --- 10.0.0.1 ping statistics --- 00:20:26.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.642 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:26.642 14:34:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.642 14:34:33 -- nvmf/common.sh@421 -- # return 0 00:20:26.642 14:34:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:26.642 14:34:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.642 14:34:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:26.642 14:34:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:26.642 14:34:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.642 14:34:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:26.642 14:34:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:26.642 14:34:33 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:26.642 14:34:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:26.642 14:34:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.642 14:34:33 -- common/autotest_common.sh@10 -- # set +x 00:20:26.901 14:34:33 -- nvmf/common.sh@469 -- # nvmfpid=77840 00:20:26.901 14:34:33 -- nvmf/common.sh@470 -- # waitforlisten 77840 00:20:26.901 14:34:33 -- common/autotest_common.sh@829 -- # '[' -z 77840 ']' 00:20:26.901 14:34:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:26.901 14:34:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.901 14:34:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.901 14:34:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:26.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.901 14:34:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.901 14:34:33 -- common/autotest_common.sh@10 -- # set +x 00:20:26.901 [2024-12-06 14:34:33.662903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:26.901 [2024-12-06 14:34:33.663018] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:26.901 [2024-12-06 14:34:33.803456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:27.160 [2024-12-06 14:34:33.923500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:27.160 [2024-12-06 14:34:33.923664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.160 [2024-12-06 14:34:33.923680] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.160 [2024-12-06 14:34:33.923690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.160 [2024-12-06 14:34:33.923918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:27.160 [2024-12-06 14:34:33.924213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:27.160 [2024-12-06 14:34:33.924275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:27.160 [2024-12-06 14:34:33.924278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.726 14:34:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.726 14:34:34 -- common/autotest_common.sh@862 -- # return 0 00:20:27.726 14:34:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:27.726 14:34:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.726 14:34:34 -- common/autotest_common.sh@10 -- # set +x 00:20:27.985 14:34:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.985 14:34:34 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.985 14:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.985 14:34:34 -- common/autotest_common.sh@10 -- # set +x 00:20:27.985 [2024-12-06 14:34:34.739317] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.985 14:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.985 14:34:34 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:27.985 14:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.985 14:34:34 -- common/autotest_common.sh@10 -- # set +x 00:20:27.985 Malloc0 00:20:27.985 14:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.985 14:34:34 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.985 14:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.985 14:34:34 -- common/autotest_common.sh@10 -- # set +x 00:20:27.985 14:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.985 14:34:34 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.985 14:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.985 14:34:34 -- common/autotest_common.sh@10 -- # set +x 
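For reference, the nvmf_veth_init sequence traced above reduces to a small, reproducible topology: an initiator-side veth pair on the host (nvmf_init_if at 10.0.0.1/24), two target-side veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24), and a bridge nvmf_br joining the host-side peers, with iptables opening TCP port 4420. A condensed sketch using only commands that appear in the trace (run as root; cleanup and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this in place, 10.0.0.2 and 10.0.0.3 answer the pings recorded above, and nvmf_tgt launched under "ip netns exec nvmf_tgt_ns_spdk" listens inside the namespace while the test applications connect from the host side of the bridge.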
00:20:27.985 14:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.985 14:34:34 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.985 14:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.985 14:34:34 -- common/autotest_common.sh@10 -- # set +x 00:20:27.985 [2024-12-06 14:34:34.796689] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.985 14:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.985 14:34:34 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:27.985 14:34:34 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:27.985 14:34:34 -- nvmf/common.sh@520 -- # config=() 00:20:27.985 14:34:34 -- nvmf/common.sh@520 -- # local subsystem config 00:20:27.985 14:34:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:27.985 14:34:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:27.985 { 00:20:27.985 "params": { 00:20:27.985 "name": "Nvme$subsystem", 00:20:27.985 "trtype": "$TEST_TRANSPORT", 00:20:27.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.985 "adrfam": "ipv4", 00:20:27.985 "trsvcid": "$NVMF_PORT", 00:20:27.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.985 "hdgst": ${hdgst:-false}, 00:20:27.985 "ddgst": ${ddgst:-false} 00:20:27.985 }, 00:20:27.985 "method": "bdev_nvme_attach_controller" 00:20:27.985 } 00:20:27.985 EOF 00:20:27.985 )") 00:20:27.985 14:34:34 -- nvmf/common.sh@542 -- # cat 00:20:27.985 14:34:34 -- nvmf/common.sh@544 -- # jq . 00:20:27.985 14:34:34 -- nvmf/common.sh@545 -- # IFS=, 00:20:27.985 14:34:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:27.985 "params": { 00:20:27.985 "name": "Nvme1", 00:20:27.985 "trtype": "tcp", 00:20:27.985 "traddr": "10.0.0.2", 00:20:27.985 "adrfam": "ipv4", 00:20:27.985 "trsvcid": "4420", 00:20:27.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.985 "hdgst": false, 00:20:27.985 "ddgst": false 00:20:27.985 }, 00:20:27.985 "method": "bdev_nvme_attach_controller" 00:20:27.985 }' 00:20:27.985 [2024-12-06 14:34:34.860201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:27.985 [2024-12-06 14:34:34.860325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77894 ] 00:20:28.244 [2024-12-06 14:34:35.008390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:28.244 [2024-12-06 14:34:35.135503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.244 [2024-12-06 14:34:35.135644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.244 [2024-12-06 14:34:35.135889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.501 [2024-12-06 14:34:35.312146] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
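The gen_nvmf_target_json output printed above is the configuration bdevio actually consumes: the generated bdev_nvme_attach_controller document for Nvme1 at 10.0.0.2:4420 is handed to the application as a file descriptor, and the --json /dev/fd/62 argument in the trace is consistent with feeding it through process substitution. A minimal sketch of that pattern, with the binary path and helper name taken from the trace and the JSON body being the one printed above:

  # gen_nvmf_target_json (from test/nvmf/common.sh) emits the JSON shown in the trace
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
      --json <(gen_nvmf_target_json) --no-huge -s 1024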
00:20:28.501 [2024-12-06 14:34:35.312215] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:28.501 I/O targets: 00:20:28.501 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:28.501 00:20:28.501 00:20:28.501 CUnit - A unit testing framework for C - Version 2.1-3 00:20:28.501 http://cunit.sourceforge.net/ 00:20:28.501 00:20:28.501 00:20:28.501 Suite: bdevio tests on: Nvme1n1 00:20:28.501 Test: blockdev write read block ...passed 00:20:28.501 Test: blockdev write zeroes read block ...passed 00:20:28.501 Test: blockdev write zeroes read no split ...passed 00:20:28.501 Test: blockdev write zeroes read split ...passed 00:20:28.501 Test: blockdev write zeroes read split partial ...passed 00:20:28.501 Test: blockdev reset ...[2024-12-06 14:34:35.439053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:28.501 [2024-12-06 14:34:35.439209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d11c0 (9): Bad file descriptor 00:20:28.501 [2024-12-06 14:34:35.450871] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:28.501 passed 00:20:28.501 Test: blockdev write read 8 blocks ...passed 00:20:28.501 Test: blockdev write read size > 128k ...passed 00:20:28.501 Test: blockdev write read invalid size ...passed 00:20:28.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:28.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:28.760 Test: blockdev write read max offset ...passed 00:20:28.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:28.760 Test: blockdev writev readv 8 blocks ...passed 00:20:28.760 Test: blockdev writev readv 30 x 1block ...passed 00:20:28.760 Test: blockdev writev readv block ...passed 00:20:28.760 Test: blockdev writev readv size > 128k ...passed 00:20:28.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:28.760 Test: blockdev comparev and writev ...[2024-12-06 14:34:35.629022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.760 [2024-12-06 14:34:35.629150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:28.760 [2024-12-06 14:34:35.629171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.760 [2024-12-06 14:34:35.629183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.760 [2024-12-06 14:34:35.629656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.760 [2024-12-06 14:34:35.629685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:28.760 [2024-12-06 14:34:35.629703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.760 [2024-12-06 14:34:35.629715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:28.760 [2024-12-06 14:34:35.630128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.760 [2024-12-06 14:34:35.630155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:28.760 [2024-12-06 14:34:35.630366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.761 [2024-12-06 14:34:35.630390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:28.761 [2024-12-06 14:34:35.630879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.761 [2024-12-06 14:34:35.630927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:28.761 [2024-12-06 14:34:35.630964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.761 [2024-12-06 14:34:35.630975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:28.761 passed 00:20:28.761 Test: blockdev nvme passthru rw ...passed 00:20:28.761 Test: blockdev nvme passthru vendor specific ...passed 00:20:28.761 Test: blockdev nvme admin passthru ...[2024-12-06 14:34:35.712796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.761 [2024-12-06 14:34:35.712829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:28.761 [2024-12-06 14:34:35.712990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.761 [2024-12-06 14:34:35.713006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:28.761 [2024-12-06 14:34:35.713166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.761 [2024-12-06 14:34:35.713181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:28.761 [2024-12-06 14:34:35.713297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.761 [2024-12-06 14:34:35.713312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:29.019 passed 00:20:29.019 Test: blockdev copy ...passed 00:20:29.019 00:20:29.019 Run Summary: Type Total Ran Passed Failed Inactive 00:20:29.019 suites 1 1 n/a 0 0 00:20:29.019 tests 23 23 23 0 0 00:20:29.019 asserts 152 152 152 0 n/a 00:20:29.019 00:20:29.019 Elapsed time = 0.908 seconds 00:20:29.277 14:34:36 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.277 14:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.277 14:34:36 -- common/autotest_common.sh@10 -- # set +x 00:20:29.277 14:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.277 14:34:36 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:29.277 14:34:36 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:29.277 14:34:36 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:20:29.277 14:34:36 -- nvmf/common.sh@116 -- # sync 00:20:29.535 14:34:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:29.535 14:34:36 -- nvmf/common.sh@119 -- # set +e 00:20:29.535 14:34:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:29.535 14:34:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:29.535 rmmod nvme_tcp 00:20:29.535 rmmod nvme_fabrics 00:20:29.535 rmmod nvme_keyring 00:20:29.535 14:34:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:29.535 14:34:36 -- nvmf/common.sh@123 -- # set -e 00:20:29.535 14:34:36 -- nvmf/common.sh@124 -- # return 0 00:20:29.535 14:34:36 -- nvmf/common.sh@477 -- # '[' -n 77840 ']' 00:20:29.535 14:34:36 -- nvmf/common.sh@478 -- # killprocess 77840 00:20:29.535 14:34:36 -- common/autotest_common.sh@936 -- # '[' -z 77840 ']' 00:20:29.535 14:34:36 -- common/autotest_common.sh@940 -- # kill -0 77840 00:20:29.535 14:34:36 -- common/autotest_common.sh@941 -- # uname 00:20:29.535 14:34:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:29.535 14:34:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77840 00:20:29.535 14:34:36 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:29.535 14:34:36 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:29.535 killing process with pid 77840 00:20:29.535 14:34:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77840' 00:20:29.535 14:34:36 -- common/autotest_common.sh@955 -- # kill 77840 00:20:29.535 14:34:36 -- common/autotest_common.sh@960 -- # wait 77840 00:20:30.099 14:34:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:30.099 14:34:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:30.099 14:34:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:30.099 14:34:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:30.099 14:34:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:30.099 14:34:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.099 14:34:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.099 14:34:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.099 14:34:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:30.099 00:20:30.099 real 0m3.837s 00:20:30.099 user 0m13.631s 00:20:30.099 sys 0m1.440s 00:20:30.099 14:34:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:30.099 14:34:36 -- common/autotest_common.sh@10 -- # set +x 00:20:30.099 ************************************ 00:20:30.099 END TEST nvmf_bdevio_no_huge 00:20:30.099 ************************************ 00:20:30.099 14:34:36 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:30.099 14:34:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:30.099 14:34:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:30.099 14:34:36 -- common/autotest_common.sh@10 -- # set +x 00:20:30.099 ************************************ 00:20:30.099 START TEST nvmf_tls 00:20:30.099 ************************************ 00:20:30.099 14:34:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:30.099 * Looking for test storage... 
00:20:30.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:30.099 14:34:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:30.099 14:34:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:30.099 14:34:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:30.358 14:34:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:30.358 14:34:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:30.358 14:34:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:30.358 14:34:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:30.358 14:34:37 -- scripts/common.sh@335 -- # IFS=.-: 00:20:30.358 14:34:37 -- scripts/common.sh@335 -- # read -ra ver1 00:20:30.358 14:34:37 -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.358 14:34:37 -- scripts/common.sh@336 -- # read -ra ver2 00:20:30.358 14:34:37 -- scripts/common.sh@337 -- # local 'op=<' 00:20:30.358 14:34:37 -- scripts/common.sh@339 -- # ver1_l=2 00:20:30.358 14:34:37 -- scripts/common.sh@340 -- # ver2_l=1 00:20:30.358 14:34:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:30.358 14:34:37 -- scripts/common.sh@343 -- # case "$op" in 00:20:30.358 14:34:37 -- scripts/common.sh@344 -- # : 1 00:20:30.358 14:34:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:30.358 14:34:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.358 14:34:37 -- scripts/common.sh@364 -- # decimal 1 00:20:30.358 14:34:37 -- scripts/common.sh@352 -- # local d=1 00:20:30.358 14:34:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.358 14:34:37 -- scripts/common.sh@354 -- # echo 1 00:20:30.358 14:34:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:30.358 14:34:37 -- scripts/common.sh@365 -- # decimal 2 00:20:30.358 14:34:37 -- scripts/common.sh@352 -- # local d=2 00:20:30.358 14:34:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.358 14:34:37 -- scripts/common.sh@354 -- # echo 2 00:20:30.358 14:34:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:30.358 14:34:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:30.358 14:34:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:30.358 14:34:37 -- scripts/common.sh@367 -- # return 0 00:20:30.358 14:34:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.358 14:34:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.358 --rc genhtml_branch_coverage=1 00:20:30.358 --rc genhtml_function_coverage=1 00:20:30.358 --rc genhtml_legend=1 00:20:30.358 --rc geninfo_all_blocks=1 00:20:30.358 --rc geninfo_unexecuted_blocks=1 00:20:30.358 00:20:30.358 ' 00:20:30.358 14:34:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.358 --rc genhtml_branch_coverage=1 00:20:30.358 --rc genhtml_function_coverage=1 00:20:30.358 --rc genhtml_legend=1 00:20:30.358 --rc geninfo_all_blocks=1 00:20:30.358 --rc geninfo_unexecuted_blocks=1 00:20:30.358 00:20:30.358 ' 00:20:30.358 14:34:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.358 --rc genhtml_branch_coverage=1 00:20:30.358 --rc genhtml_function_coverage=1 00:20:30.358 --rc genhtml_legend=1 00:20:30.358 --rc geninfo_all_blocks=1 00:20:30.358 --rc geninfo_unexecuted_blocks=1 00:20:30.358 00:20:30.358 ' 00:20:30.358 
14:34:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.358 --rc genhtml_branch_coverage=1 00:20:30.358 --rc genhtml_function_coverage=1 00:20:30.358 --rc genhtml_legend=1 00:20:30.358 --rc geninfo_all_blocks=1 00:20:30.358 --rc geninfo_unexecuted_blocks=1 00:20:30.358 00:20:30.358 ' 00:20:30.358 14:34:37 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:30.358 14:34:37 -- nvmf/common.sh@7 -- # uname -s 00:20:30.358 14:34:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.358 14:34:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.358 14:34:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.358 14:34:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.358 14:34:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.358 14:34:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.358 14:34:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.358 14:34:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.358 14:34:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.358 14:34:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.358 14:34:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:30.358 14:34:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:20:30.358 14:34:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.358 14:34:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.358 14:34:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:30.358 14:34:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:30.358 14:34:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.358 14:34:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.358 14:34:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.358 14:34:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.359 14:34:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.359 14:34:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.359 14:34:37 -- paths/export.sh@5 -- # export PATH 00:20:30.359 14:34:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.359 14:34:37 -- nvmf/common.sh@46 -- # : 0 00:20:30.359 14:34:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:30.359 14:34:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:30.359 14:34:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:30.359 14:34:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.359 14:34:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.359 14:34:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:30.359 14:34:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:30.359 14:34:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:30.359 14:34:37 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.359 14:34:37 -- target/tls.sh@71 -- # nvmftestinit 00:20:30.359 14:34:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:30.359 14:34:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.359 14:34:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:30.359 14:34:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:30.359 14:34:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:30.359 14:34:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.359 14:34:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.359 14:34:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.359 14:34:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:30.359 14:34:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:30.359 14:34:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:30.359 14:34:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:30.359 14:34:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:30.359 14:34:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:30.359 14:34:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.359 14:34:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.359 14:34:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:30.359 14:34:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:30.359 14:34:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:30.359 14:34:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:30.359 14:34:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:30.359 
14:34:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.359 14:34:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:30.359 14:34:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:30.359 14:34:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:30.359 14:34:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:30.359 14:34:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:30.359 14:34:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:30.359 Cannot find device "nvmf_tgt_br" 00:20:30.359 14:34:37 -- nvmf/common.sh@154 -- # true 00:20:30.359 14:34:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.359 Cannot find device "nvmf_tgt_br2" 00:20:30.359 14:34:37 -- nvmf/common.sh@155 -- # true 00:20:30.359 14:34:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:30.359 14:34:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:30.359 Cannot find device "nvmf_tgt_br" 00:20:30.359 14:34:37 -- nvmf/common.sh@157 -- # true 00:20:30.359 14:34:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:30.359 Cannot find device "nvmf_tgt_br2" 00:20:30.359 14:34:37 -- nvmf/common.sh@158 -- # true 00:20:30.359 14:34:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:30.359 14:34:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:30.359 14:34:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.359 14:34:37 -- nvmf/common.sh@161 -- # true 00:20:30.359 14:34:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.359 14:34:37 -- nvmf/common.sh@162 -- # true 00:20:30.359 14:34:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:30.359 14:34:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:30.359 14:34:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.359 14:34:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.359 14:34:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:30.618 14:34:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:30.618 14:34:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:30.618 14:34:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:30.618 14:34:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:30.618 14:34:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:30.618 14:34:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:30.618 14:34:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:30.618 14:34:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:30.618 14:34:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.618 14:34:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:30.618 14:34:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.618 14:34:37 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:30.618 14:34:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:30.618 14:34:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.618 14:34:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.618 14:34:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.618 14:34:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.618 14:34:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.618 14:34:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:30.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:20:30.618 00:20:30.618 --- 10.0.0.2 ping statistics --- 00:20:30.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.618 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:30.618 14:34:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:30.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:30.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:30.618 00:20:30.618 --- 10.0.0.3 ping statistics --- 00:20:30.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.618 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:30.618 14:34:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:30.618 00:20:30.618 --- 10.0.0.1 ping statistics --- 00:20:30.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.618 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:30.618 14:34:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.618 14:34:37 -- nvmf/common.sh@421 -- # return 0 00:20:30.618 14:34:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:30.618 14:34:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.618 14:34:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:30.618 14:34:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:30.618 14:34:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.618 14:34:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:30.618 14:34:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:30.618 14:34:37 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:30.618 14:34:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:30.618 14:34:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.618 14:34:37 -- common/autotest_common.sh@10 -- # set +x 00:20:30.618 14:34:37 -- nvmf/common.sh@469 -- # nvmfpid=78090 00:20:30.618 14:34:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:30.618 14:34:37 -- nvmf/common.sh@470 -- # waitforlisten 78090 00:20:30.618 14:34:37 -- common/autotest_common.sh@829 -- # '[' -z 78090 ']' 00:20:30.618 14:34:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.618 14:34:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:30.618 14:34:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.618 14:34:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.618 14:34:37 -- common/autotest_common.sh@10 -- # set +x 00:20:30.618 [2024-12-06 14:34:37.565828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:30.618 [2024-12-06 14:34:37.565934] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.911 [2024-12-06 14:34:37.712323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.911 [2024-12-06 14:34:37.829678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.911 [2024-12-06 14:34:37.829877] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.911 [2024-12-06 14:34:37.829894] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.911 [2024-12-06 14:34:37.829906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.911 [2024-12-06 14:34:37.829963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.847 14:34:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.847 14:34:38 -- common/autotest_common.sh@862 -- # return 0 00:20:31.847 14:34:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:31.847 14:34:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.847 14:34:38 -- common/autotest_common.sh@10 -- # set +x 00:20:31.847 14:34:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.847 14:34:38 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:31.847 14:34:38 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:32.105 true 00:20:32.105 14:34:38 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.105 14:34:38 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:32.362 14:34:39 -- target/tls.sh@82 -- # version=0 00:20:32.362 14:34:39 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:32.362 14:34:39 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:32.620 14:34:39 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.620 14:34:39 -- target/tls.sh@90 -- # jq -r .tls_version 00:20:32.879 14:34:39 -- target/tls.sh@90 -- # version=13 00:20:32.879 14:34:39 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:20:32.879 14:34:39 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:33.137 14:34:40 -- target/tls.sh@98 -- # jq -r .tls_version 00:20:33.137 14:34:40 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:33.396 14:34:40 -- target/tls.sh@98 -- # version=7 00:20:33.396 14:34:40 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:20:33.396 14:34:40 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:33.396 14:34:40 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:33.655 14:34:40 -- 
target/tls.sh@105 -- # ktls=false 00:20:33.655 14:34:40 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:20:33.655 14:34:40 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:33.920 14:34:40 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:33.920 14:34:40 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.178 14:34:41 -- target/tls.sh@113 -- # ktls=true 00:20:34.178 14:34:41 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:20:34.178 14:34:41 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:34.745 14:34:41 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:20:34.745 14:34:41 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.004 14:34:41 -- target/tls.sh@121 -- # ktls=false 00:20:35.004 14:34:41 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:20:35.004 14:34:41 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:20:35.004 14:34:41 -- target/tls.sh@49 -- # local key hash crc 00:20:35.004 14:34:41 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:20:35.004 14:34:41 -- target/tls.sh@51 -- # hash=01 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # gzip -1 -c 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # tail -c8 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # head -c 4 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # crc='p$H�' 00:20:35.004 14:34:41 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:20:35.004 14:34:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:35.004 14:34:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:35.004 14:34:41 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:35.004 14:34:41 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:20:35.004 14:34:41 -- target/tls.sh@49 -- # local key hash crc 00:20:35.004 14:34:41 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:20:35.004 14:34:41 -- target/tls.sh@51 -- # hash=01 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # gzip -1 -c 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # tail -c8 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # head -c 4 00:20:35.004 14:34:41 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:20:35.004 14:34:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:35.004 14:34:41 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:20:35.004 14:34:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:35.004 14:34:41 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:35.004 14:34:41 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:35.004 14:34:41 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:20:35.004 14:34:41 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:35.004 14:34:41 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
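The format_interchange_psk trace above is compact enough to reproduce by hand. The configured hex-string key is checksummed by borrowing gzip's trailer: the last 8 bytes of a gzip stream are the CRC32 of the input followed by the input length, so tail -c8 | head -c 4 extracts the CRC32. The interchange key is then "NVMeTLSkey-1:", the hash identifier field (01 in the trace), and the base64 of the key bytes plus that CRC. A sketch under those assumptions; note the CRC is raw binary, so this shell version only holds when none of its bytes is NUL, as in the traced runs:

  key=00112233445566778899aabbccddeeff                           # configured PSK from the trace
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)      # CRC32 taken from the gzip trailer
  psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
  echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The same steps applied to ffeeddccbbaa99887766554433221100 give the key_2 value written to key2.txt above; both files are then chmod 0600 before being passed to the target and to bdevperf via --psk.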
00:20:35.004 14:34:41 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:35.004 14:34:41 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:20:35.004 14:34:41 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:35.262 14:34:42 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:35.521 14:34:42 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:35.521 14:34:42 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:35.521 14:34:42 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:36.089 [2024-12-06 14:34:42.766191] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.089 14:34:42 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:36.348 14:34:43 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:36.348 [2024-12-06 14:34:43.290301] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.348 [2024-12-06 14:34:43.290575] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.348 14:34:43 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:36.605 malloc0 00:20:36.605 14:34:43 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.864 14:34:43 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:37.122 14:34:44 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:49.326 Initializing NVMe Controllers 00:20:49.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.326 Initialization complete. Launching workers. 
00:20:49.326 ======================================================== 00:20:49.326 Latency(us) 00:20:49.326 Device Information : IOPS MiB/s Average min max 00:20:49.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9910.58 38.71 6459.12 1754.36 15393.50 00:20:49.326 ======================================================== 00:20:49.326 Total : 9910.58 38.71 6459.12 1754.36 15393.50 00:20:49.326 00:20:49.326 14:34:54 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:49.326 14:34:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.326 14:34:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.326 14:34:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.326 14:34:54 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:20:49.326 14:34:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.326 14:34:54 -- target/tls.sh@28 -- # bdevperf_pid=78460 00:20:49.326 14:34:54 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.326 14:34:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.326 14:34:54 -- target/tls.sh@31 -- # waitforlisten 78460 /var/tmp/bdevperf.sock 00:20:49.326 14:34:54 -- common/autotest_common.sh@829 -- # '[' -z 78460 ']' 00:20:49.326 14:34:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.326 14:34:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.326 14:34:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.326 14:34:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.326 14:34:54 -- common/autotest_common.sh@10 -- # set +x 00:20:49.326 [2024-12-06 14:34:54.314810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:49.326 [2024-12-06 14:34:54.314940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78460 ] 00:20:49.326 [2024-12-06 14:34:54.456866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.326 [2024-12-06 14:34:54.586989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.326 14:34:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.326 14:34:55 -- common/autotest_common.sh@862 -- # return 0 00:20:49.326 14:34:55 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:20:49.326 [2024-12-06 14:34:55.584931] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.326 TLSTESTn1 00:20:49.326 14:34:55 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:49.326 Running I/O for 10 seconds... 
00:20:59.334 00:20:59.334 Latency(us) 00:20:59.334 [2024-12-06T14:35:06.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.334 [2024-12-06T14:35:06.304Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:59.334 Verification LBA range: start 0x0 length 0x2000 00:20:59.334 TLSTESTn1 : 10.02 5346.89 20.89 0.00 0.00 23896.44 5481.19 23831.27 00:20:59.334 [2024-12-06T14:35:06.304Z] =================================================================================================================== 00:20:59.334 [2024-12-06T14:35:06.304Z] Total : 5346.89 20.89 0.00 0.00 23896.44 5481.19 23831.27 00:20:59.334 0 00:20:59.334 14:35:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.334 14:35:05 -- target/tls.sh@45 -- # killprocess 78460 00:20:59.334 14:35:05 -- common/autotest_common.sh@936 -- # '[' -z 78460 ']' 00:20:59.334 14:35:05 -- common/autotest_common.sh@940 -- # kill -0 78460 00:20:59.334 14:35:05 -- common/autotest_common.sh@941 -- # uname 00:20:59.334 14:35:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:59.334 14:35:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78460 00:20:59.334 14:35:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:59.334 14:35:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:59.334 killing process with pid 78460 00:20:59.334 14:35:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78460' 00:20:59.334 14:35:05 -- common/autotest_common.sh@955 -- # kill 78460 00:20:59.334 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.334 00:20:59.334 Latency(us) 00:20:59.334 [2024-12-06T14:35:06.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.334 [2024-12-06T14:35:06.304Z] =================================================================================================================== 00:20:59.334 [2024-12-06T14:35:06.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.334 14:35:05 -- common/autotest_common.sh@960 -- # wait 78460 00:20:59.334 14:35:06 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:20:59.334 14:35:06 -- common/autotest_common.sh@650 -- # local es=0 00:20:59.334 14:35:06 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:20:59.334 14:35:06 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:59.334 14:35:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.334 14:35:06 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:59.334 14:35:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.334 14:35:06 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:20:59.334 14:35:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:59.334 14:35:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:59.334 14:35:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:59.334 14:35:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:20:59.334 14:35:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.334 
14:35:06 -- target/tls.sh@28 -- # bdevperf_pid=78616 00:20:59.334 14:35:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.334 14:35:06 -- target/tls.sh@31 -- # waitforlisten 78616 /var/tmp/bdevperf.sock 00:20:59.334 14:35:06 -- common/autotest_common.sh@829 -- # '[' -z 78616 ']' 00:20:59.334 14:35:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.334 14:35:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.334 14:35:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.334 14:35:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.334 14:35:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.334 14:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:59.334 [2024-12-06 14:35:06.207975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:59.334 [2024-12-06 14:35:06.208106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78616 ] 00:20:59.593 [2024-12-06 14:35:06.344117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.593 [2024-12-06 14:35:06.446641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.529 14:35:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.529 14:35:07 -- common/autotest_common.sh@862 -- # return 0 00:21:00.529 14:35:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:21:00.529 [2024-12-06 14:35:07.489802] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.789 [2024-12-06 14:35:07.502384] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:00.789 [2024-12-06 14:35:07.503192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6e3d0 (107): Transport endpoint is not connected 00:21:00.789 [2024-12-06 14:35:07.504177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6e3d0 (9): Bad file descriptor 00:21:00.789 [2024-12-06 14:35:07.505173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.789 [2024-12-06 14:35:07.505199] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:00.789 [2024-12-06 14:35:07.505224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
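That failure is the point of this test (tls.sh@155): the target only knows the PSK from key1.txt for host1, while bdevperf (pid 78616) attaches with key2.txt, so the TLS setup cannot succeed and the connection is torn down, which surfaces on the initiator as the 'Transport endpoint is not connected' errors above. The attempt amounts to the following (sketch, arguments copied from the trace):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/target/key2.txt   # wrong key for this host/subsystem pair, expected to fail

The JSON-RPC error that follows (Code=-32602) is what the NOT wrapper is checking for.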
00:21:00.789 2024/12/06 14:35:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:00.789 request: 00:21:00.789 { 00:21:00.789 "method": "bdev_nvme_attach_controller", 00:21:00.789 "params": { 00:21:00.789 "name": "TLSTEST", 00:21:00.789 "trtype": "tcp", 00:21:00.789 "traddr": "10.0.0.2", 00:21:00.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.789 "adrfam": "ipv4", 00:21:00.789 "trsvcid": "4420", 00:21:00.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.789 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:21:00.789 } 00:21:00.789 } 00:21:00.789 Got JSON-RPC error response 00:21:00.789 GoRPCClient: error on JSON-RPC call 00:21:00.789 14:35:07 -- target/tls.sh@36 -- # killprocess 78616 00:21:00.789 14:35:07 -- common/autotest_common.sh@936 -- # '[' -z 78616 ']' 00:21:00.789 14:35:07 -- common/autotest_common.sh@940 -- # kill -0 78616 00:21:00.789 14:35:07 -- common/autotest_common.sh@941 -- # uname 00:21:00.789 14:35:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.789 14:35:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78616 00:21:00.789 killing process with pid 78616 00:21:00.789 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.789 00:21:00.789 Latency(us) 00:21:00.789 [2024-12-06T14:35:07.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.789 [2024-12-06T14:35:07.759Z] =================================================================================================================== 00:21:00.789 [2024-12-06T14:35:07.759Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.789 14:35:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:00.789 14:35:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:00.789 14:35:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78616' 00:21:00.789 14:35:07 -- common/autotest_common.sh@955 -- # kill 78616 00:21:00.789 14:35:07 -- common/autotest_common.sh@960 -- # wait 78616 00:21:01.052 14:35:07 -- target/tls.sh@37 -- # return 1 00:21:01.052 14:35:07 -- common/autotest_common.sh@653 -- # es=1 00:21:01.052 14:35:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:01.052 14:35:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:01.052 14:35:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:01.052 14:35:07 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:01.052 14:35:07 -- common/autotest_common.sh@650 -- # local es=0 00:21:01.052 14:35:07 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:01.052 14:35:07 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:01.052 14:35:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.052 14:35:07 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:01.052 14:35:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.052 14:35:07 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:01.052 14:35:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.052 14:35:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.052 14:35:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:01.052 14:35:07 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:21:01.052 14:35:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.052 14:35:07 -- target/tls.sh@28 -- # bdevperf_pid=78657 00:21:01.052 14:35:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.052 14:35:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.052 14:35:07 -- target/tls.sh@31 -- # waitforlisten 78657 /var/tmp/bdevperf.sock 00:21:01.052 14:35:07 -- common/autotest_common.sh@829 -- # '[' -z 78657 ']' 00:21:01.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.052 14:35:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.052 14:35:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.052 14:35:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.052 14:35:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.052 14:35:07 -- common/autotest_common.sh@10 -- # set +x 00:21:01.052 [2024-12-06 14:35:07.885216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:01.052 [2024-12-06 14:35:07.885441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78657 ] 00:21:01.311 [2024-12-06 14:35:08.024991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.311 [2024-12-06 14:35:08.148127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.245 14:35:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.245 14:35:08 -- common/autotest_common.sh@862 -- # return 0 00:21:02.245 14:35:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:02.245 [2024-12-06 14:35:09.128929] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.245 [2024-12-06 14:35:09.134164] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:02.245 [2024-12-06 14:35:09.134219] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:02.245 [2024-12-06 14:35:09.134273] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:02.245 [2024-12-06 14:35:09.134897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x62d3d0 (107): Transport endpoint is not connected 00:21:02.245 [2024-12-06 14:35:09.135881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62d3d0 (9): Bad file descriptor 00:21:02.245 [2024-12-06 14:35:09.136876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.245 [2024-12-06 14:35:09.136916] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:02.245 [2024-12-06 14:35:09.136926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.245 2024/12/06 14:35:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:02.245 request: 00:21:02.245 { 00:21:02.245 "method": "bdev_nvme_attach_controller", 00:21:02.245 "params": { 00:21:02.245 "name": "TLSTEST", 00:21:02.245 "trtype": "tcp", 00:21:02.245 "traddr": "10.0.0.2", 00:21:02.245 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:02.245 "adrfam": "ipv4", 00:21:02.245 "trsvcid": "4420", 00:21:02.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.245 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:21:02.245 } 00:21:02.246 } 00:21:02.246 Got JSON-RPC error response 00:21:02.246 GoRPCClient: error on JSON-RPC call 00:21:02.246 14:35:09 -- target/tls.sh@36 -- # killprocess 78657 00:21:02.246 14:35:09 -- common/autotest_common.sh@936 -- # '[' -z 78657 ']' 00:21:02.246 14:35:09 -- common/autotest_common.sh@940 -- # kill -0 78657 00:21:02.246 14:35:09 -- common/autotest_common.sh@941 -- # uname 00:21:02.246 14:35:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.246 14:35:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78657 00:21:02.246 killing process with pid 78657 00:21:02.246 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.246 00:21:02.246 Latency(us) 00:21:02.246 [2024-12-06T14:35:09.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.246 [2024-12-06T14:35:09.216Z] =================================================================================================================== 00:21:02.246 [2024-12-06T14:35:09.216Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.246 14:35:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:02.246 14:35:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:02.246 14:35:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78657' 00:21:02.246 14:35:09 -- common/autotest_common.sh@955 -- # kill 78657 00:21:02.246 14:35:09 -- common/autotest_common.sh@960 -- # wait 78657 00:21:02.504 14:35:09 -- target/tls.sh@37 -- # return 1 00:21:02.504 14:35:09 -- common/autotest_common.sh@653 -- # es=1 00:21:02.504 14:35:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.504 14:35:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.504 14:35:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.504 14:35:09 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:02.504 14:35:09 -- 
common/autotest_common.sh@650 -- # local es=0 00:21:02.504 14:35:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:02.504 14:35:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:02.504 14:35:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.504 14:35:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:02.504 14:35:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.504 14:35:09 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:02.504 14:35:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.504 14:35:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:02.504 14:35:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.504 14:35:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:21:02.504 14:35:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.504 14:35:09 -- target/tls.sh@28 -- # bdevperf_pid=78703 00:21:02.504 14:35:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.504 14:35:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.504 14:35:09 -- target/tls.sh@31 -- # waitforlisten 78703 /var/tmp/bdevperf.sock 00:21:02.504 14:35:09 -- common/autotest_common.sh@829 -- # '[' -z 78703 ']' 00:21:02.504 14:35:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.504 14:35:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.504 14:35:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.504 14:35:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.504 14:35:09 -- common/autotest_common.sh@10 -- # set +x 00:21:02.763 [2024-12-06 14:35:09.496618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:02.763 [2024-12-06 14:35:09.496730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78703 ] 00:21:02.763 [2024-12-06 14:35:09.630244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.021 [2024-12-06 14:35:09.755995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.588 14:35:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.588 14:35:10 -- common/autotest_common.sh@862 -- # return 0 00:21:03.588 14:35:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:21:03.846 [2024-12-06 14:35:10.672023] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.846 [2024-12-06 14:35:10.682284] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:03.846 [2024-12-06 14:35:10.682341] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:03.846 [2024-12-06 14:35:10.682455] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.846 [2024-12-06 14:35:10.682881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7693d0 (107): Transport endpoint is not connected 00:21:03.846 [2024-12-06 14:35:10.683870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7693d0 (9): Bad file descriptor 00:21:03.846 [2024-12-06 14:35:10.684868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:03.846 [2024-12-06 14:35:10.684919] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.846 [2024-12-06 14:35:10.684946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
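The tests at tls.sh@158 and tls.sh@161 fail for a different reason than the wrong-key case: the key bytes are correct, but the target looks the PSK up by the identity it derives from the connecting pair, and only host1/cnode1 was registered via nvmf_subsystem_add_host. The two lookups that miss are visible verbatim in the errors above:

  NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1   # attach as host2  (tls.sh@158)
  NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2   # attach to cnode2 (tls.sh@161)

With no PSK found for the identity, the target drops the connection and the initiator again reports Code=-32602. The next test (tls.sh@164) goes one step further and passes no --psk at all, with the same outcome on the initiator side.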
00:21:03.846 2024/12/06 14:35:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:03.846 request: 00:21:03.846 { 00:21:03.846 "method": "bdev_nvme_attach_controller", 00:21:03.846 "params": { 00:21:03.846 "name": "TLSTEST", 00:21:03.846 "trtype": "tcp", 00:21:03.846 "traddr": "10.0.0.2", 00:21:03.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.846 "adrfam": "ipv4", 00:21:03.846 "trsvcid": "4420", 00:21:03.846 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.846 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:21:03.846 } 00:21:03.846 } 00:21:03.846 Got JSON-RPC error response 00:21:03.846 GoRPCClient: error on JSON-RPC call 00:21:03.846 14:35:10 -- target/tls.sh@36 -- # killprocess 78703 00:21:03.846 14:35:10 -- common/autotest_common.sh@936 -- # '[' -z 78703 ']' 00:21:03.846 14:35:10 -- common/autotest_common.sh@940 -- # kill -0 78703 00:21:03.846 14:35:10 -- common/autotest_common.sh@941 -- # uname 00:21:03.846 14:35:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.846 14:35:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78703 00:21:03.846 killing process with pid 78703 00:21:03.846 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.846 00:21:03.846 Latency(us) 00:21:03.846 [2024-12-06T14:35:10.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.846 [2024-12-06T14:35:10.816Z] =================================================================================================================== 00:21:03.846 [2024-12-06T14:35:10.816Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.846 14:35:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:03.846 14:35:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:03.846 14:35:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78703' 00:21:03.846 14:35:10 -- common/autotest_common.sh@955 -- # kill 78703 00:21:03.846 14:35:10 -- common/autotest_common.sh@960 -- # wait 78703 00:21:04.105 14:35:10 -- target/tls.sh@37 -- # return 1 00:21:04.105 14:35:10 -- common/autotest_common.sh@653 -- # es=1 00:21:04.105 14:35:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:04.105 14:35:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:04.105 14:35:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:04.105 14:35:10 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:04.105 14:35:10 -- common/autotest_common.sh@650 -- # local es=0 00:21:04.105 14:35:10 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:04.105 14:35:10 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:04.105 14:35:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.105 14:35:10 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:04.105 14:35:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.105 14:35:10 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:04.105 14:35:10 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:04.105 14:35:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:04.105 14:35:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:04.105 14:35:10 -- target/tls.sh@23 -- # psk= 00:21:04.105 14:35:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.105 14:35:10 -- target/tls.sh@28 -- # bdevperf_pid=78748 00:21:04.105 14:35:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.105 14:35:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.105 14:35:11 -- target/tls.sh@31 -- # waitforlisten 78748 /var/tmp/bdevperf.sock 00:21:04.105 14:35:11 -- common/autotest_common.sh@829 -- # '[' -z 78748 ']' 00:21:04.105 14:35:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.105 14:35:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.105 14:35:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.105 14:35:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.105 14:35:11 -- common/autotest_common.sh@10 -- # set +x 00:21:04.105 [2024-12-06 14:35:11.050979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:04.105 [2024-12-06 14:35:11.051134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78748 ] 00:21:04.363 [2024-12-06 14:35:11.187687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.363 [2024-12-06 14:35:11.299851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.301 14:35:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.301 14:35:12 -- common/autotest_common.sh@862 -- # return 0 00:21:05.301 14:35:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:05.560 [2024-12-06 14:35:12.342598] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:05.560 [2024-12-06 14:35:12.343838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4dc0 (9): Bad file descriptor 00:21:05.560 [2024-12-06 14:35:12.344831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.560 [2024-12-06 14:35:12.344875] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:05.560 [2024-12-06 14:35:12.344901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:05.560 2024/12/06 14:35:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:05.560 request: 00:21:05.560 { 00:21:05.560 "method": "bdev_nvme_attach_controller", 00:21:05.560 "params": { 00:21:05.560 "name": "TLSTEST", 00:21:05.560 "trtype": "tcp", 00:21:05.560 "traddr": "10.0.0.2", 00:21:05.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.560 "adrfam": "ipv4", 00:21:05.560 "trsvcid": "4420", 00:21:05.560 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:21:05.560 } 00:21:05.560 } 00:21:05.560 Got JSON-RPC error response 00:21:05.560 GoRPCClient: error on JSON-RPC call 00:21:05.560 14:35:12 -- target/tls.sh@36 -- # killprocess 78748 00:21:05.560 14:35:12 -- common/autotest_common.sh@936 -- # '[' -z 78748 ']' 00:21:05.560 14:35:12 -- common/autotest_common.sh@940 -- # kill -0 78748 00:21:05.560 14:35:12 -- common/autotest_common.sh@941 -- # uname 00:21:05.560 14:35:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.560 14:35:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78748 00:21:05.560 killing process with pid 78748 00:21:05.560 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.560 00:21:05.560 Latency(us) 00:21:05.560 [2024-12-06T14:35:12.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.560 [2024-12-06T14:35:12.530Z] =================================================================================================================== 00:21:05.560 [2024-12-06T14:35:12.530Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.560 14:35:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:05.560 14:35:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:05.560 14:35:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78748' 00:21:05.560 14:35:12 -- common/autotest_common.sh@955 -- # kill 78748 00:21:05.560 14:35:12 -- common/autotest_common.sh@960 -- # wait 78748 00:21:05.819 14:35:12 -- target/tls.sh@37 -- # return 1 00:21:05.819 14:35:12 -- common/autotest_common.sh@653 -- # es=1 00:21:05.819 14:35:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.819 14:35:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.819 14:35:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.819 14:35:12 -- target/tls.sh@167 -- # killprocess 78090 00:21:05.819 14:35:12 -- common/autotest_common.sh@936 -- # '[' -z 78090 ']' 00:21:05.819 14:35:12 -- common/autotest_common.sh@940 -- # kill -0 78090 00:21:05.819 14:35:12 -- common/autotest_common.sh@941 -- # uname 00:21:05.819 14:35:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.819 14:35:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78090 00:21:05.819 killing process with pid 78090 00:21:05.819 14:35:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:05.819 14:35:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:05.819 14:35:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78090' 00:21:05.819 14:35:12 -- common/autotest_common.sh@955 -- # kill 78090 00:21:05.819 14:35:12 -- common/autotest_common.sh@960 -- # wait 78090 00:21:06.078 14:35:12 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:21:06.078 14:35:12 -- target/tls.sh@49 -- # local key hash crc 00:21:06.078 14:35:12 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:06.078 14:35:12 -- target/tls.sh@51 -- # hash=02 00:21:06.078 14:35:12 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:21:06.078 14:35:12 -- target/tls.sh@52 -- # gzip -1 -c 00:21:06.078 14:35:12 -- target/tls.sh@52 -- # tail -c8 00:21:06.078 14:35:12 -- target/tls.sh@52 -- # head -c 4 00:21:06.078 14:35:12 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:06.078 14:35:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:06.078 14:35:12 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:06.078 14:35:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:06.078 14:35:12 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:06.078 14:35:12 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:06.078 14:35:12 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:06.078 14:35:12 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:06.078 14:35:12 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:06.078 14:35:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:06.078 14:35:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.078 14:35:12 -- common/autotest_common.sh@10 -- # set +x 00:21:06.078 14:35:12 -- nvmf/common.sh@469 -- # nvmfpid=78809 00:21:06.078 14:35:12 -- nvmf/common.sh@470 -- # waitforlisten 78809 00:21:06.078 14:35:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:06.078 14:35:12 -- common/autotest_common.sh@829 -- # '[' -z 78809 ']' 00:21:06.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.078 14:35:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.078 14:35:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.078 14:35:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.078 14:35:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.078 14:35:12 -- common/autotest_common.sh@10 -- # set +x 00:21:06.078 [2024-12-06 14:35:13.040434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:06.078 [2024-12-06 14:35:13.040545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.336 [2024-12-06 14:35:13.182295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.336 [2024-12-06 14:35:13.304086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:06.336 [2024-12-06 14:35:13.304236] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
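The format_interchange_psk trace above (tls.sh@168) shows how the long-key test builds the TLS PSK interchange string: take the configured key as an ASCII hex string, append the 4-byte CRC32 of that string (extracted from a gzip -1 trailer, which is why the crc variable prints as garbage bytes in the log), base64-encode the result, and wrap it as NVMeTLSkey-1:<hash>:<base64>:, with 02 as the hash field in this run. A minimal stand-alone sketch of the same recipe (like the shell-variable trick in the trace, it relies on the CRC bytes containing no NULs, which holds for this key):

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # gzip trailer = CRC32 + size; keep the CRC32
  echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"
  # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The result is written to key_long.txt, chmod 0600, and used for the next round of positive and negative tests.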
00:21:06.336 [2024-12-06 14:35:13.304248] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.336 [2024-12-06 14:35:13.304257] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.336 [2024-12-06 14:35:13.304282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.271 14:35:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.271 14:35:14 -- common/autotest_common.sh@862 -- # return 0 00:21:07.271 14:35:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:07.271 14:35:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.271 14:35:14 -- common/autotest_common.sh@10 -- # set +x 00:21:07.271 14:35:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.271 14:35:14 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:07.271 14:35:14 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:07.271 14:35:14 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.529 [2024-12-06 14:35:14.328525] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.529 14:35:14 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.787 14:35:14 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:08.046 [2024-12-06 14:35:14.840670] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:08.046 [2024-12-06 14:35:14.841021] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.046 14:35:14 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:08.303 malloc0 00:21:08.303 14:35:15 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:08.561 14:35:15 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:08.818 14:35:15 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:08.818 14:35:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:08.818 14:35:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:08.818 14:35:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:08.818 14:35:15 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:21:08.818 14:35:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.818 14:35:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:08.818 14:35:15 -- target/tls.sh@28 -- # bdevperf_pid=78916 00:21:08.818 14:35:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.818 14:35:15 -- target/tls.sh@31 -- # waitforlisten 78916 /var/tmp/bdevperf.sock 00:21:08.818 14:35:15 -- 
common/autotest_common.sh@829 -- # '[' -z 78916 ']' 00:21:08.818 14:35:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.818 14:35:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.819 14:35:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.819 14:35:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.819 14:35:15 -- common/autotest_common.sh@10 -- # set +x 00:21:08.819 [2024-12-06 14:35:15.609102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:08.819 [2024-12-06 14:35:15.609219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78916 ] 00:21:08.819 [2024-12-06 14:35:15.746035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.076 [2024-12-06 14:35:15.872347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.009 14:35:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.009 14:35:16 -- common/autotest_common.sh@862 -- # return 0 00:21:10.009 14:35:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:10.009 [2024-12-06 14:35:16.859558] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.009 TLSTESTn1 00:21:10.009 14:35:16 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:10.266 Running I/O for 10 seconds... 
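The run above, whose results follow, uses the same initiator-side pattern as every check in this file: start bdevperf with an RPC socket, attach a TLS controller with the PSK under test, then drive the verify workload. Roughly (paths shortened, commands as in the trace; bdevperf is started in the background and the script waits for its socket):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/target/key_long.txt
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests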
00:21:20.234 00:21:20.234 Latency(us) 00:21:20.234 [2024-12-06T14:35:27.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.234 [2024-12-06T14:35:27.204Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.234 Verification LBA range: start 0x0 length 0x2000 00:21:20.234 TLSTESTn1 : 10.02 5769.31 22.54 0.00 0.00 22147.99 4676.89 20018.27 00:21:20.234 [2024-12-06T14:35:27.204Z] =================================================================================================================== 00:21:20.234 [2024-12-06T14:35:27.204Z] Total : 5769.31 22.54 0.00 0.00 22147.99 4676.89 20018.27 00:21:20.234 0 00:21:20.234 14:35:27 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.234 14:35:27 -- target/tls.sh@45 -- # killprocess 78916 00:21:20.234 14:35:27 -- common/autotest_common.sh@936 -- # '[' -z 78916 ']' 00:21:20.234 14:35:27 -- common/autotest_common.sh@940 -- # kill -0 78916 00:21:20.234 14:35:27 -- common/autotest_common.sh@941 -- # uname 00:21:20.234 14:35:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:20.234 14:35:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78916 00:21:20.234 14:35:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:20.235 14:35:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:20.235 killing process with pid 78916 00:21:20.235 14:35:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78916' 00:21:20.235 14:35:27 -- common/autotest_common.sh@955 -- # kill 78916 00:21:20.235 14:35:27 -- common/autotest_common.sh@960 -- # wait 78916 00:21:20.235 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.235 00:21:20.235 Latency(us) 00:21:20.235 [2024-12-06T14:35:27.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.235 [2024-12-06T14:35:27.205Z] =================================================================================================================== 00:21:20.235 [2024-12-06T14:35:27.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.493 14:35:27 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:20.493 14:35:27 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:20.493 14:35:27 -- common/autotest_common.sh@650 -- # local es=0 00:21:20.493 14:35:27 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:20.493 14:35:27 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:20.493 14:35:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.493 14:35:27 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:20.493 14:35:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.493 14:35:27 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:20.493 14:35:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.493 14:35:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.493 14:35:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.493 14:35:27 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:21:20.493 14:35:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.493 14:35:27 -- target/tls.sh@28 -- # bdevperf_pid=79063 00:21:20.493 14:35:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.493 14:35:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.493 14:35:27 -- target/tls.sh@31 -- # waitforlisten 79063 /var/tmp/bdevperf.sock 00:21:20.493 14:35:27 -- common/autotest_common.sh@829 -- # '[' -z 79063 ']' 00:21:20.493 14:35:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.493 14:35:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.493 14:35:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.493 14:35:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.493 14:35:27 -- common/autotest_common.sh@10 -- # set +x 00:21:20.493 [2024-12-06 14:35:27.449780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:20.493 [2024-12-06 14:35:27.449866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79063 ] 00:21:20.752 [2024-12-06 14:35:27.588781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.752 [2024-12-06 14:35:27.715692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.687 14:35:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.687 14:35:28 -- common/autotest_common.sh@862 -- # return 0 00:21:21.687 14:35:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:21.946 [2024-12-06 14:35:28.750442] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.946 [2024-12-06 14:35:28.750500] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:21.946 2024/12/06 14:35:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:21.946 request: 00:21:21.946 { 00:21:21.946 "method": "bdev_nvme_attach_controller", 00:21:21.946 "params": { 00:21:21.946 "name": "TLSTEST", 00:21:21.946 "trtype": "tcp", 00:21:21.946 "traddr": "10.0.0.2", 00:21:21.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.946 "adrfam": "ipv4", 00:21:21.946 "trsvcid": "4420", 00:21:21.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.946 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:21:21.946 } 00:21:21.946 } 00:21:21.946 Got 
JSON-RPC error response 00:21:21.946 GoRPCClient: error on JSON-RPC call 00:21:21.946 14:35:28 -- target/tls.sh@36 -- # killprocess 79063 00:21:21.946 14:35:28 -- common/autotest_common.sh@936 -- # '[' -z 79063 ']' 00:21:21.946 14:35:28 -- common/autotest_common.sh@940 -- # kill -0 79063 00:21:21.946 14:35:28 -- common/autotest_common.sh@941 -- # uname 00:21:21.946 14:35:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.946 14:35:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79063 00:21:21.946 killing process with pid 79063 00:21:21.946 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.946 00:21:21.946 Latency(us) 00:21:21.946 [2024-12-06T14:35:28.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.946 [2024-12-06T14:35:28.916Z] =================================================================================================================== 00:21:21.946 [2024-12-06T14:35:28.916Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.946 14:35:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:21.946 14:35:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:21.946 14:35:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79063' 00:21:21.946 14:35:28 -- common/autotest_common.sh@955 -- # kill 79063 00:21:21.946 14:35:28 -- common/autotest_common.sh@960 -- # wait 79063 00:21:22.204 14:35:29 -- target/tls.sh@37 -- # return 1 00:21:22.204 14:35:29 -- common/autotest_common.sh@653 -- # es=1 00:21:22.204 14:35:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.204 14:35:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.204 14:35:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.204 14:35:29 -- target/tls.sh@183 -- # killprocess 78809 00:21:22.204 14:35:29 -- common/autotest_common.sh@936 -- # '[' -z 78809 ']' 00:21:22.204 14:35:29 -- common/autotest_common.sh@940 -- # kill -0 78809 00:21:22.205 14:35:29 -- common/autotest_common.sh@941 -- # uname 00:21:22.205 14:35:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.205 14:35:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78809 00:21:22.205 killing process with pid 78809 00:21:22.205 14:35:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:22.205 14:35:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:22.205 14:35:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78809' 00:21:22.205 14:35:29 -- common/autotest_common.sh@955 -- # kill 78809 00:21:22.205 14:35:29 -- common/autotest_common.sh@960 -- # wait 78809 00:21:22.462 14:35:29 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:22.462 14:35:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:22.463 14:35:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.463 14:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:22.463 14:35:29 -- nvmf/common.sh@469 -- # nvmfpid=79115 00:21:22.463 14:35:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.463 14:35:29 -- nvmf/common.sh@470 -- # waitforlisten 79115 00:21:22.463 14:35:29 -- common/autotest_common.sh@829 -- # '[' -z 79115 ']' 00:21:22.463 14:35:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.463 14:35:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.463 
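The failure above (tls.sh@180, bdevperf pid 79063) is purely about file permissions: after chmod 0666 on key_long.txt the RPC is rejected while loading the key file ('Incorrect permissions for PSK file', Code=-22), before any connection is attempted, even though the key bytes are unchanged. In other words, this is expected to fail (sketch):

  chmod 0666 test/nvmf/target/key_long.txt
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/target/key_long.txt   # rejected: 0600 is what the passing runs use

The trace then tears everything down and starts a fresh target (pid 79115) to exercise the same check on the target side.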
14:35:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.463 14:35:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.463 14:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:22.722 [2024-12-06 14:35:29.446386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:22.722 [2024-12-06 14:35:29.446493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.722 [2024-12-06 14:35:29.580500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.979 [2024-12-06 14:35:29.693864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:22.979 [2024-12-06 14:35:29.694004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.979 [2024-12-06 14:35:29.694018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.979 [2024-12-06 14:35:29.694028] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.979 [2024-12-06 14:35:29.694069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.917 14:35:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.917 14:35:30 -- common/autotest_common.sh@862 -- # return 0 00:21:23.917 14:35:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:23.917 14:35:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.917 14:35:30 -- common/autotest_common.sh@10 -- # set +x 00:21:23.917 14:35:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.917 14:35:30 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:23.917 14:35:30 -- common/autotest_common.sh@650 -- # local es=0 00:21:23.917 14:35:30 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:23.917 14:35:30 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:23.917 14:35:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.917 14:35:30 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:23.917 14:35:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.917 14:35:30 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:23.917 14:35:30 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:23.917 14:35:30 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.917 [2024-12-06 14:35:30.846332] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.917 14:35:30 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.485 14:35:31 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.743 
[2024-12-06 14:35:31.470545] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.743 [2024-12-06 14:35:31.470872] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.744 14:35:31 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:25.002 malloc0 00:21:25.002 14:35:31 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:25.261 14:35:32 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:25.520 [2024-12-06 14:35:32.374889] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:25.520 [2024-12-06 14:35:32.374948] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:25.520 [2024-12-06 14:35:32.374982] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:25.520 2024/12/06 14:35:32 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:21:25.520 request: 00:21:25.520 { 00:21:25.520 "method": "nvmf_subsystem_add_host", 00:21:25.520 "params": { 00:21:25.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.520 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.520 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:21:25.520 } 00:21:25.520 } 00:21:25.520 Got JSON-RPC error response 00:21:25.520 GoRPCClient: error on JSON-RPC call 00:21:25.520 14:35:32 -- common/autotest_common.sh@653 -- # es=1 00:21:25.520 14:35:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.520 14:35:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.520 14:35:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.520 14:35:32 -- target/tls.sh@189 -- # killprocess 79115 00:21:25.520 14:35:32 -- common/autotest_common.sh@936 -- # '[' -z 79115 ']' 00:21:25.520 14:35:32 -- common/autotest_common.sh@940 -- # kill -0 79115 00:21:25.520 14:35:32 -- common/autotest_common.sh@941 -- # uname 00:21:25.520 14:35:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:25.520 14:35:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79115 00:21:25.520 killing process with pid 79115 00:21:25.520 14:35:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:25.520 14:35:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:25.520 14:35:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79115' 00:21:25.520 14:35:32 -- common/autotest_common.sh@955 -- # kill 79115 00:21:25.520 14:35:32 -- common/autotest_common.sh@960 -- # wait 79115 00:21:25.780 14:35:32 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:25.780 14:35:32 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:21:25.780 14:35:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:25.780 14:35:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.780 14:35:32 -- common/autotest_common.sh@10 -- # set +x 00:21:25.780 14:35:32 -- nvmf/common.sh@469 -- # nvmfpid=79237 
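The target enforces the same permission rule: with key_long.txt still at 0666, the nvmf_subsystem_add_host call above fails inside tcp_load_psk ('Could not retrieve PSK from file', Code=-32603), which is exactly what the NOT wrapper at tls.sh@186 expects. The failing call is simply (sketch):

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/target/key_long.txt   # refused while the key file is world-readable

The script then kills that target (pid 79115), restores 0600 on key_long.txt, and brings up a new target (pid 79237), whose startup output is interleaved here; the subsequent setup and bdevperf run are expected to pass again.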
00:21:25.780 14:35:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:25.780 14:35:32 -- nvmf/common.sh@470 -- # waitforlisten 79237 00:21:25.780 14:35:32 -- common/autotest_common.sh@829 -- # '[' -z 79237 ']' 00:21:25.780 14:35:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.780 14:35:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.780 14:35:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.780 14:35:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.780 14:35:32 -- common/autotest_common.sh@10 -- # set +x 00:21:26.039 [2024-12-06 14:35:32.764390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:26.039 [2024-12-06 14:35:32.764537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.039 [2024-12-06 14:35:32.903598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.298 [2024-12-06 14:35:33.013465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:26.298 [2024-12-06 14:35:33.013859] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.298 [2024-12-06 14:35:33.013967] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.298 [2024-12-06 14:35:33.014045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:26.298 [2024-12-06 14:35:33.014136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.866 14:35:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.866 14:35:33 -- common/autotest_common.sh@862 -- # return 0 00:21:26.866 14:35:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:26.866 14:35:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.866 14:35:33 -- common/autotest_common.sh@10 -- # set +x 00:21:26.866 14:35:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.866 14:35:33 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:26.866 14:35:33 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:26.866 14:35:33 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:27.125 [2024-12-06 14:35:34.063065] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.125 14:35:34 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:27.384 14:35:34 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:27.643 [2024-12-06 14:35:34.567196] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:27.643 [2024-12-06 14:35:34.567540] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.643 14:35:34 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:27.902 malloc0 00:21:27.902 14:35:34 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:28.160 14:35:35 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:28.419 14:35:35 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:28.419 14:35:35 -- target/tls.sh@197 -- # bdevperf_pid=79335 00:21:28.419 14:35:35 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:28.419 14:35:35 -- target/tls.sh@200 -- # waitforlisten 79335 /var/tmp/bdevperf.sock 00:21:28.419 14:35:35 -- common/autotest_common.sh@829 -- # '[' -z 79335 ']' 00:21:28.419 14:35:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.419 14:35:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.419 14:35:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.419 14:35:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.419 14:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:28.419 [2024-12-06 14:35:35.341478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:28.419 [2024-12-06 14:35:35.341586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79335 ] 00:21:28.676 [2024-12-06 14:35:35.476062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.677 [2024-12-06 14:35:35.607869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.671 14:35:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.671 14:35:36 -- common/autotest_common.sh@862 -- # return 0 00:21:29.671 14:35:36 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:29.671 [2024-12-06 14:35:36.576934] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.929 TLSTESTn1 00:21:29.929 14:35:36 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:30.188 14:35:36 -- target/tls.sh@205 -- # tgtconf='{ 00:21:30.188 "subsystems": [ 00:21:30.188 { 00:21:30.188 "subsystem": "iobuf", 00:21:30.188 "config": [ 00:21:30.188 { 00:21:30.188 "method": "iobuf_set_options", 00:21:30.188 "params": { 00:21:30.188 "large_bufsize": 135168, 00:21:30.188 "large_pool_count": 1024, 00:21:30.188 "small_bufsize": 8192, 00:21:30.188 "small_pool_count": 8192 00:21:30.188 } 00:21:30.188 } 00:21:30.188 ] 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "subsystem": "sock", 00:21:30.188 "config": [ 00:21:30.188 { 00:21:30.188 "method": "sock_impl_set_options", 00:21:30.188 "params": { 00:21:30.188 "enable_ktls": false, 00:21:30.188 "enable_placement_id": 0, 00:21:30.188 "enable_quickack": false, 00:21:30.188 "enable_recv_pipe": true, 00:21:30.188 "enable_zerocopy_send_client": false, 00:21:30.188 "enable_zerocopy_send_server": true, 00:21:30.188 "impl_name": "posix", 00:21:30.188 "recv_buf_size": 2097152, 00:21:30.188 "send_buf_size": 2097152, 00:21:30.188 "tls_version": 0, 00:21:30.188 "zerocopy_threshold": 0 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "sock_impl_set_options", 00:21:30.188 "params": { 00:21:30.188 "enable_ktls": false, 00:21:30.188 "enable_placement_id": 0, 00:21:30.188 "enable_quickack": false, 00:21:30.188 "enable_recv_pipe": true, 00:21:30.188 "enable_zerocopy_send_client": false, 00:21:30.188 "enable_zerocopy_send_server": true, 00:21:30.188 "impl_name": "ssl", 00:21:30.188 "recv_buf_size": 4096, 00:21:30.188 "send_buf_size": 4096, 00:21:30.188 "tls_version": 0, 00:21:30.188 "zerocopy_threshold": 0 00:21:30.188 } 00:21:30.188 } 00:21:30.188 ] 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "subsystem": "vmd", 00:21:30.188 "config": [] 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "subsystem": "accel", 00:21:30.188 "config": [ 00:21:30.188 { 00:21:30.188 "method": "accel_set_options", 00:21:30.188 "params": { 00:21:30.188 "buf_count": 2048, 00:21:30.188 "large_cache_size": 16, 00:21:30.188 "sequence_count": 2048, 00:21:30.188 "small_cache_size": 128, 00:21:30.188 "task_count": 2048 00:21:30.188 } 00:21:30.188 } 00:21:30.188 ] 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "subsystem": "bdev", 00:21:30.188 "config": [ 00:21:30.188 { 00:21:30.188 "method": "bdev_set_options", 00:21:30.188 "params": { 00:21:30.188 
"bdev_auto_examine": true, 00:21:30.188 "bdev_io_cache_size": 256, 00:21:30.188 "bdev_io_pool_size": 65535, 00:21:30.188 "iobuf_large_cache_size": 16, 00:21:30.188 "iobuf_small_cache_size": 128 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "bdev_raid_set_options", 00:21:30.188 "params": { 00:21:30.188 "process_window_size_kb": 1024 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "bdev_iscsi_set_options", 00:21:30.188 "params": { 00:21:30.188 "timeout_sec": 30 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "bdev_nvme_set_options", 00:21:30.188 "params": { 00:21:30.188 "action_on_timeout": "none", 00:21:30.188 "allow_accel_sequence": false, 00:21:30.188 "arbitration_burst": 0, 00:21:30.188 "bdev_retry_count": 3, 00:21:30.188 "ctrlr_loss_timeout_sec": 0, 00:21:30.188 "delay_cmd_submit": true, 00:21:30.188 "fast_io_fail_timeout_sec": 0, 00:21:30.188 "generate_uuids": false, 00:21:30.188 "high_priority_weight": 0, 00:21:30.188 "io_path_stat": false, 00:21:30.188 "io_queue_requests": 0, 00:21:30.188 "keep_alive_timeout_ms": 10000, 00:21:30.188 "low_priority_weight": 0, 00:21:30.188 "medium_priority_weight": 0, 00:21:30.188 "nvme_adminq_poll_period_us": 10000, 00:21:30.188 "nvme_ioq_poll_period_us": 0, 00:21:30.188 "reconnect_delay_sec": 0, 00:21:30.188 "timeout_admin_us": 0, 00:21:30.188 "timeout_us": 0, 00:21:30.188 "transport_ack_timeout": 0, 00:21:30.188 "transport_retry_count": 4, 00:21:30.188 "transport_tos": 0 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "bdev_nvme_set_hotplug", 00:21:30.188 "params": { 00:21:30.188 "enable": false, 00:21:30.188 "period_us": 100000 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "bdev_malloc_create", 00:21:30.188 "params": { 00:21:30.188 "block_size": 4096, 00:21:30.188 "name": "malloc0", 00:21:30.188 "num_blocks": 8192, 00:21:30.188 "optimal_io_boundary": 0, 00:21:30.188 "physical_block_size": 4096, 00:21:30.188 "uuid": "d8c503c8-f8a9-4fe2-82dd-fd9e70ee7fcb" 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "bdev_wait_for_examine" 00:21:30.188 } 00:21:30.188 ] 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "subsystem": "nbd", 00:21:30.188 "config": [] 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "subsystem": "scheduler", 00:21:30.188 "config": [ 00:21:30.188 { 00:21:30.188 "method": "framework_set_scheduler", 00:21:30.188 "params": { 00:21:30.188 "name": "static" 00:21:30.188 } 00:21:30.188 } 00:21:30.188 ] 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "subsystem": "nvmf", 00:21:30.188 "config": [ 00:21:30.188 { 00:21:30.188 "method": "nvmf_set_config", 00:21:30.188 "params": { 00:21:30.188 "admin_cmd_passthru": { 00:21:30.188 "identify_ctrlr": false 00:21:30.188 }, 00:21:30.188 "discovery_filter": "match_any" 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "nvmf_set_max_subsystems", 00:21:30.188 "params": { 00:21:30.188 "max_subsystems": 1024 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "nvmf_set_crdt", 00:21:30.188 "params": { 00:21:30.188 "crdt1": 0, 00:21:30.188 "crdt2": 0, 00:21:30.188 "crdt3": 0 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "nvmf_create_transport", 00:21:30.188 "params": { 00:21:30.188 "abort_timeout_sec": 1, 00:21:30.188 "buf_cache_size": 4294967295, 00:21:30.188 "c2h_success": false, 00:21:30.188 "dif_insert_or_strip": false, 00:21:30.188 "in_capsule_data_size": 4096, 00:21:30.188 "io_unit_size": 131072, 00:21:30.188 "max_aq_depth": 128, 
00:21:30.188 "max_io_qpairs_per_ctrlr": 127, 00:21:30.188 "max_io_size": 131072, 00:21:30.188 "max_queue_depth": 128, 00:21:30.188 "num_shared_buffers": 511, 00:21:30.188 "sock_priority": 0, 00:21:30.188 "trtype": "TCP", 00:21:30.188 "zcopy": false 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "nvmf_create_subsystem", 00:21:30.188 "params": { 00:21:30.188 "allow_any_host": false, 00:21:30.188 "ana_reporting": false, 00:21:30.188 "max_cntlid": 65519, 00:21:30.188 "max_namespaces": 10, 00:21:30.188 "min_cntlid": 1, 00:21:30.188 "model_number": "SPDK bdev Controller", 00:21:30.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.188 "serial_number": "SPDK00000000000001" 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "nvmf_subsystem_add_host", 00:21:30.188 "params": { 00:21:30.188 "host": "nqn.2016-06.io.spdk:host1", 00:21:30.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.188 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "nvmf_subsystem_add_ns", 00:21:30.188 "params": { 00:21:30.188 "namespace": { 00:21:30.188 "bdev_name": "malloc0", 00:21:30.188 "nguid": "D8C503C8F8A94FE282DDFD9E70EE7FCB", 00:21:30.188 "nsid": 1, 00:21:30.188 "uuid": "d8c503c8-f8a9-4fe2-82dd-fd9e70ee7fcb" 00:21:30.188 }, 00:21:30.188 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:30.188 } 00:21:30.188 }, 00:21:30.188 { 00:21:30.188 "method": "nvmf_subsystem_add_listener", 00:21:30.188 "params": { 00:21:30.188 "listen_address": { 00:21:30.188 "adrfam": "IPv4", 00:21:30.188 "traddr": "10.0.0.2", 00:21:30.188 "trsvcid": "4420", 00:21:30.188 "trtype": "TCP" 00:21:30.188 }, 00:21:30.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.188 "secure_channel": true 00:21:30.188 } 00:21:30.188 } 00:21:30.188 ] 00:21:30.188 } 00:21:30.188 ] 00:21:30.188 }' 00:21:30.188 14:35:36 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:30.447 14:35:37 -- target/tls.sh@206 -- # bdevperfconf='{ 00:21:30.447 "subsystems": [ 00:21:30.447 { 00:21:30.447 "subsystem": "iobuf", 00:21:30.447 "config": [ 00:21:30.447 { 00:21:30.447 "method": "iobuf_set_options", 00:21:30.447 "params": { 00:21:30.447 "large_bufsize": 135168, 00:21:30.447 "large_pool_count": 1024, 00:21:30.447 "small_bufsize": 8192, 00:21:30.447 "small_pool_count": 8192 00:21:30.447 } 00:21:30.447 } 00:21:30.447 ] 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "subsystem": "sock", 00:21:30.447 "config": [ 00:21:30.447 { 00:21:30.447 "method": "sock_impl_set_options", 00:21:30.447 "params": { 00:21:30.447 "enable_ktls": false, 00:21:30.447 "enable_placement_id": 0, 00:21:30.447 "enable_quickack": false, 00:21:30.447 "enable_recv_pipe": true, 00:21:30.447 "enable_zerocopy_send_client": false, 00:21:30.447 "enable_zerocopy_send_server": true, 00:21:30.447 "impl_name": "posix", 00:21:30.447 "recv_buf_size": 2097152, 00:21:30.447 "send_buf_size": 2097152, 00:21:30.447 "tls_version": 0, 00:21:30.447 "zerocopy_threshold": 0 00:21:30.447 } 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "method": "sock_impl_set_options", 00:21:30.447 "params": { 00:21:30.447 "enable_ktls": false, 00:21:30.447 "enable_placement_id": 0, 00:21:30.447 "enable_quickack": false, 00:21:30.447 "enable_recv_pipe": true, 00:21:30.447 "enable_zerocopy_send_client": false, 00:21:30.447 "enable_zerocopy_send_server": true, 00:21:30.447 "impl_name": "ssl", 00:21:30.447 "recv_buf_size": 4096, 00:21:30.447 "send_buf_size": 4096, 00:21:30.447 
"tls_version": 0, 00:21:30.447 "zerocopy_threshold": 0 00:21:30.447 } 00:21:30.447 } 00:21:30.447 ] 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "subsystem": "vmd", 00:21:30.447 "config": [] 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "subsystem": "accel", 00:21:30.447 "config": [ 00:21:30.447 { 00:21:30.447 "method": "accel_set_options", 00:21:30.447 "params": { 00:21:30.447 "buf_count": 2048, 00:21:30.447 "large_cache_size": 16, 00:21:30.447 "sequence_count": 2048, 00:21:30.447 "small_cache_size": 128, 00:21:30.447 "task_count": 2048 00:21:30.447 } 00:21:30.447 } 00:21:30.447 ] 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "subsystem": "bdev", 00:21:30.447 "config": [ 00:21:30.447 { 00:21:30.447 "method": "bdev_set_options", 00:21:30.447 "params": { 00:21:30.447 "bdev_auto_examine": true, 00:21:30.447 "bdev_io_cache_size": 256, 00:21:30.447 "bdev_io_pool_size": 65535, 00:21:30.447 "iobuf_large_cache_size": 16, 00:21:30.447 "iobuf_small_cache_size": 128 00:21:30.447 } 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "method": "bdev_raid_set_options", 00:21:30.447 "params": { 00:21:30.447 "process_window_size_kb": 1024 00:21:30.447 } 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "method": "bdev_iscsi_set_options", 00:21:30.447 "params": { 00:21:30.447 "timeout_sec": 30 00:21:30.447 } 00:21:30.447 }, 00:21:30.447 { 00:21:30.447 "method": "bdev_nvme_set_options", 00:21:30.447 "params": { 00:21:30.447 "action_on_timeout": "none", 00:21:30.447 "allow_accel_sequence": false, 00:21:30.447 "arbitration_burst": 0, 00:21:30.447 "bdev_retry_count": 3, 00:21:30.447 "ctrlr_loss_timeout_sec": 0, 00:21:30.447 "delay_cmd_submit": true, 00:21:30.448 "fast_io_fail_timeout_sec": 0, 00:21:30.448 "generate_uuids": false, 00:21:30.448 "high_priority_weight": 0, 00:21:30.448 "io_path_stat": false, 00:21:30.448 "io_queue_requests": 512, 00:21:30.448 "keep_alive_timeout_ms": 10000, 00:21:30.448 "low_priority_weight": 0, 00:21:30.448 "medium_priority_weight": 0, 00:21:30.448 "nvme_adminq_poll_period_us": 10000, 00:21:30.448 "nvme_ioq_poll_period_us": 0, 00:21:30.448 "reconnect_delay_sec": 0, 00:21:30.448 "timeout_admin_us": 0, 00:21:30.448 "timeout_us": 0, 00:21:30.448 "transport_ack_timeout": 0, 00:21:30.448 "transport_retry_count": 4, 00:21:30.448 "transport_tos": 0 00:21:30.448 } 00:21:30.448 }, 00:21:30.448 { 00:21:30.448 "method": "bdev_nvme_attach_controller", 00:21:30.448 "params": { 00:21:30.448 "adrfam": "IPv4", 00:21:30.448 "ctrlr_loss_timeout_sec": 0, 00:21:30.448 "ddgst": false, 00:21:30.448 "fast_io_fail_timeout_sec": 0, 00:21:30.448 "hdgst": false, 00:21:30.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.448 "name": "TLSTEST", 00:21:30.448 "prchk_guard": false, 00:21:30.448 "prchk_reftag": false, 00:21:30.448 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:21:30.448 "reconnect_delay_sec": 0, 00:21:30.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.448 "traddr": "10.0.0.2", 00:21:30.448 "trsvcid": "4420", 00:21:30.448 "trtype": "TCP" 00:21:30.448 } 00:21:30.448 }, 00:21:30.448 { 00:21:30.448 "method": "bdev_nvme_set_hotplug", 00:21:30.448 "params": { 00:21:30.448 "enable": false, 00:21:30.448 "period_us": 100000 00:21:30.448 } 00:21:30.448 }, 00:21:30.448 { 00:21:30.448 "method": "bdev_wait_for_examine" 00:21:30.448 } 00:21:30.448 ] 00:21:30.448 }, 00:21:30.448 { 00:21:30.448 "subsystem": "nbd", 00:21:30.448 "config": [] 00:21:30.448 } 00:21:30.448 ] 00:21:30.448 }' 00:21:30.448 14:35:37 -- target/tls.sh@208 -- # killprocess 79335 00:21:30.448 14:35:37 -- 
common/autotest_common.sh@936 -- # '[' -z 79335 ']' 00:21:30.448 14:35:37 -- common/autotest_common.sh@940 -- # kill -0 79335 00:21:30.448 14:35:37 -- common/autotest_common.sh@941 -- # uname 00:21:30.448 14:35:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:30.448 14:35:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79335 00:21:30.448 killing process with pid 79335 00:21:30.448 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.448 00:21:30.448 Latency(us) 00:21:30.448 [2024-12-06T14:35:37.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.448 [2024-12-06T14:35:37.418Z] =================================================================================================================== 00:21:30.448 [2024-12-06T14:35:37.418Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:30.448 14:35:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:30.448 14:35:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:30.448 14:35:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79335' 00:21:30.448 14:35:37 -- common/autotest_common.sh@955 -- # kill 79335 00:21:30.448 14:35:37 -- common/autotest_common.sh@960 -- # wait 79335 00:21:30.707 14:35:37 -- target/tls.sh@209 -- # killprocess 79237 00:21:30.707 14:35:37 -- common/autotest_common.sh@936 -- # '[' -z 79237 ']' 00:21:30.707 14:35:37 -- common/autotest_common.sh@940 -- # kill -0 79237 00:21:30.707 14:35:37 -- common/autotest_common.sh@941 -- # uname 00:21:30.707 14:35:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:30.707 14:35:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79237 00:21:30.707 killing process with pid 79237 00:21:30.707 14:35:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:30.707 14:35:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:30.707 14:35:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79237' 00:21:30.707 14:35:37 -- common/autotest_common.sh@955 -- # kill 79237 00:21:30.707 14:35:37 -- common/autotest_common.sh@960 -- # wait 79237 00:21:30.982 14:35:37 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:30.982 14:35:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:30.982 14:35:37 -- target/tls.sh@212 -- # echo '{ 00:21:30.982 "subsystems": [ 00:21:30.982 { 00:21:30.982 "subsystem": "iobuf", 00:21:30.982 "config": [ 00:21:30.982 { 00:21:30.982 "method": "iobuf_set_options", 00:21:30.982 "params": { 00:21:30.982 "large_bufsize": 135168, 00:21:30.982 "large_pool_count": 1024, 00:21:30.982 "small_bufsize": 8192, 00:21:30.982 "small_pool_count": 8192 00:21:30.982 } 00:21:30.982 } 00:21:30.982 ] 00:21:30.982 }, 00:21:30.982 { 00:21:30.982 "subsystem": "sock", 00:21:30.982 "config": [ 00:21:30.982 { 00:21:30.982 "method": "sock_impl_set_options", 00:21:30.982 "params": { 00:21:30.982 "enable_ktls": false, 00:21:30.982 "enable_placement_id": 0, 00:21:30.982 "enable_quickack": false, 00:21:30.982 "enable_recv_pipe": true, 00:21:30.982 "enable_zerocopy_send_client": false, 00:21:30.982 "enable_zerocopy_send_server": true, 00:21:30.982 "impl_name": "posix", 00:21:30.982 "recv_buf_size": 2097152, 00:21:30.982 "send_buf_size": 2097152, 00:21:30.982 "tls_version": 0, 00:21:30.982 "zerocopy_threshold": 0 00:21:30.982 } 00:21:30.982 }, 00:21:30.982 { 00:21:30.982 "method": "sock_impl_set_options", 00:21:30.982 "params": { 00:21:30.982 
"enable_ktls": false, 00:21:30.982 "enable_placement_id": 0, 00:21:30.982 "enable_quickack": false, 00:21:30.982 "enable_recv_pipe": true, 00:21:30.982 "enable_zerocopy_send_client": false, 00:21:30.982 "enable_zerocopy_send_server": true, 00:21:30.982 "impl_name": "ssl", 00:21:30.982 "recv_buf_size": 4096, 00:21:30.982 "send_buf_size": 4096, 00:21:30.982 "tls_version": 0, 00:21:30.982 "zerocopy_threshold": 0 00:21:30.982 } 00:21:30.982 } 00:21:30.982 ] 00:21:30.982 }, 00:21:30.982 { 00:21:30.982 "subsystem": "vmd", 00:21:30.982 "config": [] 00:21:30.982 }, 00:21:30.982 { 00:21:30.982 "subsystem": "accel", 00:21:30.982 "config": [ 00:21:30.982 { 00:21:30.982 "method": "accel_set_options", 00:21:30.982 "params": { 00:21:30.982 "buf_count": 2048, 00:21:30.982 "large_cache_size": 16, 00:21:30.982 "sequence_count": 2048, 00:21:30.982 "small_cache_size": 128, 00:21:30.982 "task_count": 2048 00:21:30.982 } 00:21:30.982 } 00:21:30.982 ] 00:21:30.982 }, 00:21:30.982 { 00:21:30.982 "subsystem": "bdev", 00:21:30.982 "config": [ 00:21:30.982 { 00:21:30.983 "method": "bdev_set_options", 00:21:30.983 "params": { 00:21:30.983 "bdev_auto_examine": true, 00:21:30.983 "bdev_io_cache_size": 256, 00:21:30.983 "bdev_io_pool_size": 65535, 00:21:30.983 "iobuf_large_cache_size": 16, 00:21:30.983 "iobuf_small_cache_size": 128 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "bdev_raid_set_options", 00:21:30.983 "params": { 00:21:30.983 "process_window_size_kb": 1024 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "bdev_iscsi_set_options", 00:21:30.983 "params": { 00:21:30.983 "timeout_sec": 30 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "bdev_nvme_set_options", 00:21:30.983 "params": { 00:21:30.983 "action_on_timeout": "none", 00:21:30.983 "allow_accel_sequence": false, 00:21:30.983 "arbitration_burst": 0, 00:21:30.983 "bdev_retry_count": 3, 00:21:30.983 "ctrlr_loss_timeout_sec": 0, 00:21:30.983 "delay_cmd_submit": true, 00:21:30.983 "fast_io_fail_timeout_sec": 0, 00:21:30.983 "generate_uuids": false, 00:21:30.983 "high_priority_weight": 0, 00:21:30.983 "io_path_stat": false, 00:21:30.983 "io_queue_requests": 0, 00:21:30.983 "keep_alive_timeout_ms": 10000, 00:21:30.983 "low_priority_weight": 0, 00:21:30.983 "medium_priority_weight": 0, 00:21:30.983 "nvme_adminq_poll_period_us": 10000, 00:21:30.983 "nvme_ioq_poll_period_us": 0, 00:21:30.983 "reconnect_delay_sec": 0, 00:21:30.983 "timeout_admin_us": 0, 00:21:30.983 "timeout_us": 0, 00:21:30.983 "transport_ack_timeout": 0, 00:21:30.983 "transport_retry_count": 4, 00:21:30.983 "transport_tos": 0 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "bdev_nvme_set_hotplug", 00:21:30.983 "params": { 00:21:30.983 "enable": false, 00:21:30.983 "period_us": 100000 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "bdev_malloc_create", 00:21:30.983 "params": { 00:21:30.983 "block_size": 4096, 00:21:30.983 "name": "malloc0", 00:21:30.983 "num_blocks": 8192, 00:21:30.983 "optimal_io_boundary": 0, 00:21:30.983 "physical_block_size": 4096, 00:21:30.983 "uuid": "d8c503c8-f8a9-4fe2-82dd-fd9e70ee7fcb" 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "bdev_wait_for_examine" 00:21:30.983 } 00:21:30.983 ] 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "subsystem": "nbd", 00:21:30.983 "config": [] 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "subsystem": "scheduler", 00:21:30.983 "config": [ 00:21:30.983 { 00:21:30.983 "method": "framework_set_scheduler", 00:21:30.983 
"params": { 00:21:30.983 "name": "static" 00:21:30.983 } 00:21:30.983 } 00:21:30.983 ] 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "subsystem": "nvmf", 00:21:30.983 "config": [ 00:21:30.983 { 00:21:30.983 "method": "nvmf_set_config", 00:21:30.983 "params": { 00:21:30.983 "admin_cmd_passthru": { 00:21:30.983 "identify_ctrlr": false 00:21:30.983 }, 00:21:30.983 "discovery_filter": "match_any" 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "nvmf_set_max_subsystems", 00:21:30.983 "params": { 00:21:30.983 "max_subsystems": 1024 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "nvmf_set_crdt", 00:21:30.983 "params": { 00:21:30.983 "crdt1": 0, 00:21:30.983 "crdt2": 0, 00:21:30.983 "crdt3": 0 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "nvmf_create_transport", 00:21:30.983 "params": { 00:21:30.983 "abort_timeout_sec": 1, 00:21:30.983 "buf_cache_size": 4294967295, 00:21:30.983 "c2h_success": false, 00:21:30.983 "dif_insert_or_strip": false, 00:21:30.983 "in_capsule_data_size": 4096, 00:21:30.983 "io_unit_size": 131072, 00:21:30.983 "max_aq_depth": 128, 00:21:30.983 "max_io_qpairs_per_ctrlr": 127, 00:21:30.983 "max_io_size": 131072, 00:21:30.983 "max_queue_depth": 128, 00:21:30.983 "num_shared_buffers": 511, 00:21:30.983 "sock_priority": 0, 00:21:30.983 "trtype": "TCP", 00:21:30.983 "zcopy": false 00:21:30.983 } 00:21:30.983 }, 00:21:30.983 { 00:21:30.983 "method": "nvmf_create_subsystem", 00:21:30.983 "params": { 00:21:30.983 "allow_any_host": false, 00:21:30.983 "ana_reporting": false, 00:21:30.983 "max_cntlid": 65519, 00:21:30.983 "max_namespaces": 10, 00:21:30.983 "min_cntlid": 1, 00:21:30.983 "model_number": "SPDK bdev Controller", 00:21:30.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.983 "serial_number": "SPDK00000000000001" 00:21:30.983 } 00:21:30.984 }, 00:21:30.984 { 00:21:30.984 "method": "nvmf_subsystem_add_host", 00:21:30.984 "params": { 00:21:30.984 "host": "nqn.2016-06.io.spdk:host1", 00:21:30.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.984 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:21:30.984 } 00:21:30.984 }, 00:21:30.984 { 00:21:30.984 "method": "nvmf_subsystem_add_ns", 00:21:30.984 "params": { 00:21:30.984 "namespace": { 00:21:30.984 "bdev_name": "malloc0", 00:21:30.984 "nguid": "D8C503C8F8A94FE282DDFD9E70EE7FCB", 00:21:30.984 "nsid": 1, 00:21:30.984 "uuid": "d8c503c8-f8a9-4fe2-82dd-fd9e70ee7fcb" 00:21:30.984 }, 00:21:30.984 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:30.984 } 00:21:30.984 }, 00:21:30.984 { 00:21:30.984 "method": "nvmf_subsystem_add_listener", 00:21:30.984 "params": { 00:21:30.984 "listen_address": { 00:21:30.984 "adrfam": "IPv4", 00:21:30.984 "traddr": "10.0.0.2", 00:21:30.984 "trsvcid": "4420", 00:21:30.984 "trtype": "TCP" 00:21:30.984 }, 00:21:30.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.984 "secure_channel": true 00:21:30.984 } 00:21:30.984 } 00:21:30.984 ] 00:21:30.984 } 00:21:30.984 ] 00:21:30.984 }' 00:21:30.984 14:35:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.984 14:35:37 -- common/autotest_common.sh@10 -- # set +x 00:21:30.984 14:35:37 -- nvmf/common.sh@469 -- # nvmfpid=79414 00:21:30.984 14:35:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:30.984 14:35:37 -- nvmf/common.sh@470 -- # waitforlisten 79414 00:21:30.984 14:35:37 -- common/autotest_common.sh@829 -- # '[' -z 79414 ']' 00:21:30.984 14:35:37 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.984 14:35:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.984 14:35:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.984 14:35:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.984 14:35:37 -- common/autotest_common.sh@10 -- # set +x 00:21:31.242 [2024-12-06 14:35:37.987323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:31.242 [2024-12-06 14:35:37.987652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.242 [2024-12-06 14:35:38.126953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.501 [2024-12-06 14:35:38.235335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:31.501 [2024-12-06 14:35:38.235529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.501 [2024-12-06 14:35:38.235544] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.501 [2024-12-06 14:35:38.235552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.501 [2024-12-06 14:35:38.235579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.502 [2024-12-06 14:35:38.460136] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.759 [2024-12-06 14:35:38.492069] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:31.759 [2024-12-06 14:35:38.492264] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.017 14:35:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.017 14:35:38 -- common/autotest_common.sh@862 -- # return 0 00:21:32.017 14:35:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:32.017 14:35:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.017 14:35:38 -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 14:35:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.275 14:35:39 -- target/tls.sh@216 -- # bdevperf_pid=79458 00:21:32.275 14:35:39 -- target/tls.sh@217 -- # waitforlisten 79458 /var/tmp/bdevperf.sock 00:21:32.275 14:35:39 -- common/autotest_common.sh@829 -- # '[' -z 79458 ']' 00:21:32.275 14:35:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.275 14:35:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.275 14:35:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:32.275 14:35:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.275 14:35:39 -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 14:35:39 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:32.275 14:35:39 -- target/tls.sh@213 -- # echo '{ 00:21:32.275 "subsystems": [ 00:21:32.275 { 00:21:32.275 "subsystem": "iobuf", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "iobuf_set_options", 00:21:32.275 "params": { 00:21:32.275 "large_bufsize": 135168, 00:21:32.275 "large_pool_count": 1024, 00:21:32.275 "small_bufsize": 8192, 00:21:32.275 "small_pool_count": 8192 00:21:32.275 } 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "sock", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "sock_impl_set_options", 00:21:32.275 "params": { 00:21:32.275 "enable_ktls": false, 00:21:32.275 "enable_placement_id": 0, 00:21:32.275 "enable_quickack": false, 00:21:32.275 "enable_recv_pipe": true, 00:21:32.275 "enable_zerocopy_send_client": false, 00:21:32.275 "enable_zerocopy_send_server": true, 00:21:32.275 "impl_name": "posix", 00:21:32.275 "recv_buf_size": 2097152, 00:21:32.275 "send_buf_size": 2097152, 00:21:32.275 "tls_version": 0, 00:21:32.275 "zerocopy_threshold": 0 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "sock_impl_set_options", 00:21:32.275 "params": { 00:21:32.275 "enable_ktls": false, 00:21:32.275 "enable_placement_id": 0, 00:21:32.275 "enable_quickack": false, 00:21:32.275 "enable_recv_pipe": true, 00:21:32.275 "enable_zerocopy_send_client": false, 00:21:32.275 "enable_zerocopy_send_server": true, 00:21:32.275 "impl_name": "ssl", 00:21:32.275 "recv_buf_size": 4096, 00:21:32.275 "send_buf_size": 4096, 00:21:32.275 "tls_version": 0, 00:21:32.275 "zerocopy_threshold": 0 00:21:32.275 } 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "vmd", 00:21:32.275 "config": [] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "accel", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "accel_set_options", 00:21:32.275 "params": { 00:21:32.275 "buf_count": 2048, 00:21:32.275 "large_cache_size": 16, 00:21:32.275 "sequence_count": 2048, 00:21:32.275 "small_cache_size": 128, 00:21:32.275 "task_count": 2048 00:21:32.275 } 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "bdev", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "bdev_set_options", 00:21:32.275 "params": { 00:21:32.275 "bdev_auto_examine": true, 00:21:32.275 "bdev_io_cache_size": 256, 00:21:32.275 "bdev_io_pool_size": 65535, 00:21:32.275 "iobuf_large_cache_size": 16, 00:21:32.275 "iobuf_small_cache_size": 128 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_raid_set_options", 00:21:32.275 "params": { 00:21:32.275 "process_window_size_kb": 1024 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_iscsi_set_options", 00:21:32.275 "params": { 00:21:32.275 "timeout_sec": 30 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_nvme_set_options", 00:21:32.275 "params": { 00:21:32.275 "action_on_timeout": "none", 00:21:32.275 "allow_accel_sequence": false, 00:21:32.275 "arbitration_burst": 0, 00:21:32.275 "bdev_retry_count": 3, 00:21:32.275 "ctrlr_loss_timeout_sec": 0, 00:21:32.275 "delay_cmd_submit": true, 00:21:32.275 "fast_io_fail_timeout_sec": 0, 
00:21:32.275 "generate_uuids": false, 00:21:32.275 "high_priority_weight": 0, 00:21:32.275 "io_path_stat": false, 00:21:32.275 "io_queue_requests": 512, 00:21:32.275 "keep_alive_timeout_ms": 10000, 00:21:32.275 "low_priority_weight": 0, 00:21:32.275 "medium_priority_weight": 0, 00:21:32.275 "nvme_adminq_poll_period_us": 10000, 00:21:32.275 "nvme_ioq_poll_period_us": 0, 00:21:32.275 "reconnect_delay_sec": 0, 00:21:32.275 "timeout_admin_us": 0, 00:21:32.275 "timeout_us": 0, 00:21:32.275 "transport_ack_timeout": 0, 00:21:32.275 "transport_retry_count": 4, 00:21:32.275 "transport_tos": 0 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_nvme_attach_controller", 00:21:32.275 "params": { 00:21:32.275 "adrfam": "IPv4", 00:21:32.276 "ctrlr_loss_timeout_sec": 0, 00:21:32.276 "ddgst": false, 00:21:32.276 "fast_io_fail_timeout_sec": 0, 00:21:32.276 "hdgst": false, 00:21:32.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.276 "name": "TLSTEST", 00:21:32.276 "prchk_guard": false, 00:21:32.276 "prchk_reftag": false, 00:21:32.276 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:21:32.276 "reconnect_delay_sec": 0, 00:21:32.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.276 "traddr": "10.0.0.2", 00:21:32.276 "trsvcid": "4420", 00:21:32.276 "trtype": "TCP" 00:21:32.276 } 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "method": "bdev_nvme_set_hotplug", 00:21:32.276 "params": { 00:21:32.276 "enable": false, 00:21:32.276 "period_us": 100000 00:21:32.276 } 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "method": "bdev_wait_for_examine" 00:21:32.276 } 00:21:32.276 ] 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "subsystem": "nbd", 00:21:32.276 "config": [] 00:21:32.276 } 00:21:32.276 ] 00:21:32.276 }' 00:21:32.276 [2024-12-06 14:35:39.061746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:32.276 [2024-12-06 14:35:39.061863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79458 ] 00:21:32.276 [2024-12-06 14:35:39.200254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.534 [2024-12-06 14:35:39.334696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.534 [2024-12-06 14:35:39.498867] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.467 14:35:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.467 14:35:40 -- common/autotest_common.sh@862 -- # return 0 00:21:33.467 14:35:40 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:33.467 Running I/O for 10 seconds... 
00:21:43.441 00:21:43.441 Latency(us) 00:21:43.441 [2024-12-06T14:35:50.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.441 [2024-12-06T14:35:50.411Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:43.441 Verification LBA range: start 0x0 length 0x2000 00:21:43.441 TLSTESTn1 : 10.01 5577.64 21.79 0.00 0.00 22918.15 2695.91 26810.18 00:21:43.441 [2024-12-06T14:35:50.411Z] =================================================================================================================== 00:21:43.441 [2024-12-06T14:35:50.411Z] Total : 5577.64 21.79 0.00 0.00 22918.15 2695.91 26810.18 00:21:43.441 0 00:21:43.441 14:35:50 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.441 14:35:50 -- target/tls.sh@223 -- # killprocess 79458 00:21:43.441 14:35:50 -- common/autotest_common.sh@936 -- # '[' -z 79458 ']' 00:21:43.441 14:35:50 -- common/autotest_common.sh@940 -- # kill -0 79458 00:21:43.441 14:35:50 -- common/autotest_common.sh@941 -- # uname 00:21:43.441 14:35:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.441 14:35:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79458 00:21:43.441 14:35:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:43.441 14:35:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:43.441 killing process with pid 79458 00:21:43.441 14:35:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79458' 00:21:43.441 14:35:50 -- common/autotest_common.sh@955 -- # kill 79458 00:21:43.441 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.441 00:21:43.441 Latency(us) 00:21:43.441 [2024-12-06T14:35:50.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.441 [2024-12-06T14:35:50.411Z] =================================================================================================================== 00:21:43.441 [2024-12-06T14:35:50.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.441 14:35:50 -- common/autotest_common.sh@960 -- # wait 79458 00:21:43.700 14:35:50 -- target/tls.sh@224 -- # killprocess 79414 00:21:43.700 14:35:50 -- common/autotest_common.sh@936 -- # '[' -z 79414 ']' 00:21:43.700 14:35:50 -- common/autotest_common.sh@940 -- # kill -0 79414 00:21:43.700 14:35:50 -- common/autotest_common.sh@941 -- # uname 00:21:43.700 14:35:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.700 14:35:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79414 00:21:43.700 14:35:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:43.700 14:35:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:43.700 killing process with pid 79414 00:21:43.700 14:35:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79414' 00:21:43.700 14:35:50 -- common/autotest_common.sh@955 -- # kill 79414 00:21:43.700 14:35:50 -- common/autotest_common.sh@960 -- # wait 79414 00:21:43.959 14:35:50 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:43.959 14:35:50 -- target/tls.sh@227 -- # cleanup 00:21:43.959 14:35:50 -- target/tls.sh@15 -- # process_shm --id 0 00:21:43.959 14:35:50 -- common/autotest_common.sh@806 -- # type=--id 00:21:43.959 14:35:50 -- common/autotest_common.sh@807 -- # id=0 00:21:43.959 14:35:50 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:43.959 14:35:50 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:21:43.959 14:35:50 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:43.959 14:35:50 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:43.959 14:35:50 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:43.959 14:35:50 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:43.959 nvmf_trace.0 00:21:43.959 14:35:50 -- common/autotest_common.sh@821 -- # return 0 00:21:43.959 14:35:50 -- target/tls.sh@16 -- # killprocess 79458 00:21:43.959 14:35:50 -- common/autotest_common.sh@936 -- # '[' -z 79458 ']' 00:21:43.959 14:35:50 -- common/autotest_common.sh@940 -- # kill -0 79458 00:21:43.959 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79458) - No such process 00:21:43.959 Process with pid 79458 is not found 00:21:43.959 14:35:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79458 is not found' 00:21:43.959 14:35:50 -- target/tls.sh@17 -- # nvmftestfini 00:21:43.959 14:35:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:43.959 14:35:50 -- nvmf/common.sh@116 -- # sync 00:21:44.218 14:35:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:44.218 14:35:50 -- nvmf/common.sh@119 -- # set +e 00:21:44.218 14:35:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:44.218 14:35:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:44.218 rmmod nvme_tcp 00:21:44.218 rmmod nvme_fabrics 00:21:44.218 rmmod nvme_keyring 00:21:44.218 14:35:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:44.218 14:35:51 -- nvmf/common.sh@123 -- # set -e 00:21:44.218 14:35:51 -- nvmf/common.sh@124 -- # return 0 00:21:44.218 14:35:51 -- nvmf/common.sh@477 -- # '[' -n 79414 ']' 00:21:44.218 14:35:51 -- nvmf/common.sh@478 -- # killprocess 79414 00:21:44.218 14:35:51 -- common/autotest_common.sh@936 -- # '[' -z 79414 ']' 00:21:44.218 14:35:51 -- common/autotest_common.sh@940 -- # kill -0 79414 00:21:44.218 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79414) - No such process 00:21:44.218 Process with pid 79414 is not found 00:21:44.218 14:35:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79414 is not found' 00:21:44.218 14:35:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:44.218 14:35:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:44.218 14:35:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:44.218 14:35:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.218 14:35:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:44.218 14:35:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.218 14:35:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.218 14:35:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.218 14:35:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:44.218 14:35:51 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:21:44.218 ************************************ 00:21:44.218 END TEST nvmf_tls 00:21:44.218 ************************************ 00:21:44.218 00:21:44.219 real 1m14.155s 00:21:44.219 user 1m54.867s 00:21:44.219 sys 0m25.387s 00:21:44.219 14:35:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:44.219 14:35:51 -- common/autotest_common.sh@10 -- # 
set +x 00:21:44.219 14:35:51 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:44.219 14:35:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:44.219 14:35:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.219 14:35:51 -- common/autotest_common.sh@10 -- # set +x 00:21:44.219 ************************************ 00:21:44.219 START TEST nvmf_fips 00:21:44.219 ************************************ 00:21:44.219 14:35:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:44.478 * Looking for test storage... 00:21:44.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:21:44.478 14:35:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:44.478 14:35:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:44.478 14:35:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:44.478 14:35:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:44.478 14:35:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:44.478 14:35:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:44.478 14:35:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:44.478 14:35:51 -- scripts/common.sh@335 -- # IFS=.-: 00:21:44.478 14:35:51 -- scripts/common.sh@335 -- # read -ra ver1 00:21:44.478 14:35:51 -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.479 14:35:51 -- scripts/common.sh@336 -- # read -ra ver2 00:21:44.479 14:35:51 -- scripts/common.sh@337 -- # local 'op=<' 00:21:44.479 14:35:51 -- scripts/common.sh@339 -- # ver1_l=2 00:21:44.479 14:35:51 -- scripts/common.sh@340 -- # ver2_l=1 00:21:44.479 14:35:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:44.479 14:35:51 -- scripts/common.sh@343 -- # case "$op" in 00:21:44.479 14:35:51 -- scripts/common.sh@344 -- # : 1 00:21:44.479 14:35:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:44.479 14:35:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.479 14:35:51 -- scripts/common.sh@364 -- # decimal 1 00:21:44.479 14:35:51 -- scripts/common.sh@352 -- # local d=1 00:21:44.479 14:35:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.479 14:35:51 -- scripts/common.sh@354 -- # echo 1 00:21:44.479 14:35:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:44.479 14:35:51 -- scripts/common.sh@365 -- # decimal 2 00:21:44.479 14:35:51 -- scripts/common.sh@352 -- # local d=2 00:21:44.479 14:35:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.479 14:35:51 -- scripts/common.sh@354 -- # echo 2 00:21:44.479 14:35:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:44.479 14:35:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:44.479 14:35:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:44.479 14:35:51 -- scripts/common.sh@367 -- # return 0 00:21:44.479 14:35:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.479 14:35:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.479 --rc genhtml_branch_coverage=1 00:21:44.479 --rc genhtml_function_coverage=1 00:21:44.479 --rc genhtml_legend=1 00:21:44.479 --rc geninfo_all_blocks=1 00:21:44.479 --rc geninfo_unexecuted_blocks=1 00:21:44.479 00:21:44.479 ' 00:21:44.479 14:35:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.479 --rc genhtml_branch_coverage=1 00:21:44.479 --rc genhtml_function_coverage=1 00:21:44.479 --rc genhtml_legend=1 00:21:44.479 --rc geninfo_all_blocks=1 00:21:44.479 --rc geninfo_unexecuted_blocks=1 00:21:44.479 00:21:44.479 ' 00:21:44.479 14:35:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.479 --rc genhtml_branch_coverage=1 00:21:44.479 --rc genhtml_function_coverage=1 00:21:44.479 --rc genhtml_legend=1 00:21:44.479 --rc geninfo_all_blocks=1 00:21:44.479 --rc geninfo_unexecuted_blocks=1 00:21:44.479 00:21:44.479 ' 00:21:44.479 14:35:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.479 --rc genhtml_branch_coverage=1 00:21:44.479 --rc genhtml_function_coverage=1 00:21:44.479 --rc genhtml_legend=1 00:21:44.479 --rc geninfo_all_blocks=1 00:21:44.479 --rc geninfo_unexecuted_blocks=1 00:21:44.479 00:21:44.479 ' 00:21:44.479 14:35:51 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.479 14:35:51 -- nvmf/common.sh@7 -- # uname -s 00:21:44.479 14:35:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.479 14:35:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.479 14:35:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.479 14:35:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.479 14:35:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.479 14:35:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.479 14:35:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.479 14:35:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.479 14:35:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.479 14:35:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.479 14:35:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:21:44.479 
14:35:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:21:44.479 14:35:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.479 14:35:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.479 14:35:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.479 14:35:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.479 14:35:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.479 14:35:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.479 14:35:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.479 14:35:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.479 14:35:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.479 14:35:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.479 14:35:51 -- paths/export.sh@5 -- # export PATH 00:21:44.479 14:35:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.479 14:35:51 -- nvmf/common.sh@46 -- # : 0 00:21:44.479 14:35:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:44.479 14:35:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:44.479 14:35:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:44.479 14:35:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.479 14:35:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.479 14:35:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:44.479 14:35:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:44.479 14:35:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:44.479 14:35:51 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.480 14:35:51 -- fips/fips.sh@89 -- # check_openssl_version 00:21:44.480 14:35:51 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:44.480 14:35:51 -- fips/fips.sh@85 -- # openssl version 00:21:44.480 14:35:51 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:44.480 14:35:51 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:21:44.480 14:35:51 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:44.480 14:35:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:44.480 14:35:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:44.480 14:35:51 -- scripts/common.sh@335 -- # IFS=.-: 00:21:44.480 14:35:51 -- scripts/common.sh@335 -- # read -ra ver1 00:21:44.480 14:35:51 -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.480 14:35:51 -- scripts/common.sh@336 -- # read -ra ver2 00:21:44.480 14:35:51 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:44.480 14:35:51 -- scripts/common.sh@339 -- # ver1_l=3 00:21:44.480 14:35:51 -- scripts/common.sh@340 -- # ver2_l=3 00:21:44.480 14:35:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:44.480 14:35:51 -- scripts/common.sh@343 -- # case "$op" in 00:21:44.480 14:35:51 -- scripts/common.sh@347 -- # : 1 00:21:44.480 14:35:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:44.480 14:35:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.480 14:35:51 -- scripts/common.sh@364 -- # decimal 3 00:21:44.480 14:35:51 -- scripts/common.sh@352 -- # local d=3 00:21:44.480 14:35:51 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:44.480 14:35:51 -- scripts/common.sh@354 -- # echo 3 00:21:44.480 14:35:51 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:44.480 14:35:51 -- scripts/common.sh@365 -- # decimal 3 00:21:44.480 14:35:51 -- scripts/common.sh@352 -- # local d=3 00:21:44.480 14:35:51 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:44.480 14:35:51 -- scripts/common.sh@354 -- # echo 3 00:21:44.480 14:35:51 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:44.480 14:35:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:44.480 14:35:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:44.480 14:35:51 -- scripts/common.sh@363 -- # (( v++ )) 00:21:44.480 14:35:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.480 14:35:51 -- scripts/common.sh@364 -- # decimal 1 00:21:44.480 14:35:51 -- scripts/common.sh@352 -- # local d=1 00:21:44.480 14:35:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.480 14:35:51 -- scripts/common.sh@354 -- # echo 1 00:21:44.480 14:35:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:44.480 14:35:51 -- scripts/common.sh@365 -- # decimal 0 00:21:44.480 14:35:51 -- scripts/common.sh@352 -- # local d=0 00:21:44.480 14:35:51 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:44.480 14:35:51 -- scripts/common.sh@354 -- # echo 0 00:21:44.480 14:35:51 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:44.480 14:35:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:44.480 14:35:51 -- scripts/common.sh@366 -- # return 0 00:21:44.480 14:35:51 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:44.480 14:35:51 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:44.480 14:35:51 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:44.480 14:35:51 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:44.480 14:35:51 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:44.480 14:35:51 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:44.480 14:35:51 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:44.480 14:35:51 -- fips/fips.sh@113 -- # build_openssl_config 00:21:44.480 14:35:51 -- fips/fips.sh@37 -- # cat 00:21:44.480 14:35:51 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:44.480 14:35:51 -- fips/fips.sh@58 -- # cat - 00:21:44.480 14:35:51 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:44.480 14:35:51 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:44.480 14:35:51 -- fips/fips.sh@116 -- # mapfile -t providers 00:21:44.480 14:35:51 -- fips/fips.sh@116 -- # grep name 00:21:44.480 14:35:51 -- fips/fips.sh@116 -- # openssl list -providers 00:21:44.480 14:35:51 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:44.480 14:35:51 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:44.480 14:35:51 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:44.739 14:35:51 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:44.739 14:35:51 -- common/autotest_common.sh@650 -- # local es=0 00:21:44.739 14:35:51 -- fips/fips.sh@127 -- # : 00:21:44.739 14:35:51 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:44.739 14:35:51 -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:44.739 14:35:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.739 14:35:51 -- common/autotest_common.sh@642 -- # type -t openssl 00:21:44.739 14:35:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.739 14:35:51 -- common/autotest_common.sh@644 -- # type -P openssl 00:21:44.739 14:35:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.739 14:35:51 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:44.739 14:35:51 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:44.739 14:35:51 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:44.739 Error setting digest 00:21:44.739 40B247644F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:44.739 40B247644F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:44.739 14:35:51 -- common/autotest_common.sh@653 -- # es=1 00:21:44.739 14:35:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:44.739 14:35:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:44.739 14:35:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:44.739 14:35:51 -- fips/fips.sh@130 -- # nvmftestinit 00:21:44.739 14:35:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:44.739 14:35:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.739 14:35:51 -- nvmf/common.sh@436 -- # prepare_net_devs 
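The fips.sh preflight traced above reduces to four checks: the installed OpenSSL must be at least 3.0.0 (3.1.1 here), a FIPS module directory must exist, 'openssl list -providers' must report both a base and a fips provider, and a non-approved digest such as MD5 must be refused, which is exactly the "Error setting digest" failure shown above. A condensed, hedged restatement of that logic (variable names are illustrative, not the script's own):

# Sketch of the FIPS preflight; each step mirrors a command visible in the trace.
ver=$(openssl version | awk '{print $2}')        # must compare >= 3.0.0
openssl info -modulesdir                         # directory expected to contain fips.so
openssl list -providers | grep name              # expect both "base" and "fips" providers
if openssl md5 /dev/null >/dev/null 2>&1; then   # MD5 succeeding would mean FIPS is not enforced
    echo 'FIPS enforcement check failed' >&2
fi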
00:21:44.739 14:35:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:44.739 14:35:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:44.739 14:35:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.739 14:35:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.739 14:35:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.739 14:35:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:44.739 14:35:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:44.739 14:35:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:44.739 14:35:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:44.739 14:35:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:44.739 14:35:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:44.739 14:35:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.739 14:35:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.739 14:35:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:44.739 14:35:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:44.739 14:35:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.739 14:35:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.739 14:35:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.739 14:35:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.739 14:35:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.739 14:35:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.739 14:35:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.739 14:35:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.739 14:35:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:44.739 14:35:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:44.739 Cannot find device "nvmf_tgt_br" 00:21:44.739 14:35:51 -- nvmf/common.sh@154 -- # true 00:21:44.739 14:35:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.739 Cannot find device "nvmf_tgt_br2" 00:21:44.739 14:35:51 -- nvmf/common.sh@155 -- # true 00:21:44.740 14:35:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:44.740 14:35:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:44.740 Cannot find device "nvmf_tgt_br" 00:21:44.740 14:35:51 -- nvmf/common.sh@157 -- # true 00:21:44.740 14:35:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:44.740 Cannot find device "nvmf_tgt_br2" 00:21:44.740 14:35:51 -- nvmf/common.sh@158 -- # true 00:21:44.740 14:35:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:44.740 14:35:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:44.740 14:35:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.740 14:35:51 -- nvmf/common.sh@161 -- # true 00:21:44.740 14:35:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.740 14:35:51 -- nvmf/common.sh@162 -- # true 00:21:44.740 14:35:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.740 14:35:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:44.740 14:35:51 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:44.740 14:35:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:44.740 14:35:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:44.740 14:35:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:44.740 14:35:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:44.740 14:35:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:44.999 14:35:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:44.999 14:35:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:44.999 14:35:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:44.999 14:35:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:44.999 14:35:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:44.999 14:35:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:44.999 14:35:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:44.999 14:35:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:44.999 14:35:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:44.999 14:35:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:44.999 14:35:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:44.999 14:35:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.999 14:35:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.999 14:35:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.999 14:35:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.999 14:35:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:44.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:21:44.999 00:21:44.999 --- 10.0.0.2 ping statistics --- 00:21:44.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.999 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:44.999 14:35:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:44.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:44.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:21:44.999 00:21:44.999 --- 10.0.0.3 ping statistics --- 00:21:44.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.999 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:44.999 14:35:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:21:44.999 00:21:44.999 --- 10.0.0.1 ping statistics --- 00:21:44.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.999 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:44.999 14:35:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.999 14:35:51 -- nvmf/common.sh@421 -- # return 0 00:21:44.999 14:35:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:44.999 14:35:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.999 14:35:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:44.999 14:35:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:44.999 14:35:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.999 14:35:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:44.999 14:35:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:44.999 14:35:51 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:44.999 14:35:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:44.999 14:35:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.999 14:35:51 -- common/autotest_common.sh@10 -- # set +x 00:21:44.999 14:35:51 -- nvmf/common.sh@469 -- # nvmfpid=79824 00:21:44.999 14:35:51 -- nvmf/common.sh@470 -- # waitforlisten 79824 00:21:44.999 14:35:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.999 14:35:51 -- common/autotest_common.sh@829 -- # '[' -z 79824 ']' 00:21:44.999 14:35:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.999 14:35:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.999 14:35:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.999 14:35:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.999 14:35:51 -- common/autotest_common.sh@10 -- # set +x 00:21:44.999 [2024-12-06 14:35:51.946904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:45.000 [2024-12-06 14:35:51.947106] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.317 [2024-12-06 14:35:52.087294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.317 [2024-12-06 14:35:52.215369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.317 [2024-12-06 14:35:52.215621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.317 [2024-12-06 14:35:52.215638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.317 [2024-12-06 14:35:52.215648] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
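The nvmf_veth_init sequence traced above builds the test network used for the rest of the run: one veth pair stays on the host as the initiator leg (10.0.0.1), two more are moved into the nvmf_tgt_ns_spdk namespace as target legs (10.0.0.2 and 10.0.0.3), the host-side peers are enslaved to the nvmf_br bridge, an iptables rule admits TCP port 4420, and single-packet pings confirm reachability in both directions. Condensed to its essentials, with the second target leg and the individual link-up steps omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator to target reachability check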
00:21:45.317 [2024-12-06 14:35:52.215686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.268 14:35:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.268 14:35:52 -- common/autotest_common.sh@862 -- # return 0 00:21:46.268 14:35:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:46.268 14:35:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.268 14:35:52 -- common/autotest_common.sh@10 -- # set +x 00:21:46.268 14:35:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.268 14:35:52 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:46.268 14:35:52 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:46.268 14:35:52 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:46.268 14:35:52 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:46.268 14:35:52 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:46.268 14:35:53 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:46.268 14:35:53 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:46.268 14:35:53 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.268 [2024-12-06 14:35:53.228219] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.527 [2024-12-06 14:35:53.244108] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.527 [2024-12-06 14:35:53.244345] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.527 malloc0 00:21:46.527 14:35:53 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.527 14:35:53 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.527 14:35:53 -- fips/fips.sh@147 -- # bdevperf_pid=79882 00:21:46.527 14:35:53 -- fips/fips.sh@148 -- # waitforlisten 79882 /var/tmp/bdevperf.sock 00:21:46.527 14:35:53 -- common/autotest_common.sh@829 -- # '[' -z 79882 ']' 00:21:46.527 14:35:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.527 14:35:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.527 14:35:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.527 14:35:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.527 14:35:53 -- common/autotest_common.sh@10 -- # set +x 00:21:46.527 [2024-12-06 14:35:53.369144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:46.527 [2024-12-06 14:35:53.369805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79882 ] 00:21:46.786 [2024-12-06 14:35:53.506232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.786 [2024-12-06 14:35:53.635925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.722 14:35:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.722 14:35:54 -- common/autotest_common.sh@862 -- # return 0 00:21:47.722 14:35:54 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:47.722 [2024-12-06 14:35:54.635121] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.980 TLSTESTn1 00:21:47.980 14:35:54 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.980 Running I/O for 10 seconds... 00:21:57.954 00:21:57.954 Latency(us) 00:21:57.954 [2024-12-06T14:36:04.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.954 [2024-12-06T14:36:04.924Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:57.954 Verification LBA range: start 0x0 length 0x2000 00:21:57.954 TLSTESTn1 : 10.02 5452.21 21.30 0.00 0.00 23437.56 5719.51 31933.91 00:21:57.954 [2024-12-06T14:36:04.924Z] =================================================================================================================== 00:21:57.954 [2024-12-06T14:36:04.924Z] Total : 5452.21 21.30 0.00 0.00 23437.56 5719.51 31933.91 00:21:57.954 0 00:21:57.954 14:36:04 -- fips/fips.sh@1 -- # cleanup 00:21:57.954 14:36:04 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:57.954 14:36:04 -- common/autotest_common.sh@806 -- # type=--id 00:21:57.954 14:36:04 -- common/autotest_common.sh@807 -- # id=0 00:21:57.954 14:36:04 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:57.954 14:36:04 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:57.954 14:36:04 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:57.954 14:36:04 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:57.954 14:36:04 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:57.955 14:36:04 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:57.955 nvmf_trace.0 00:21:58.213 14:36:04 -- common/autotest_common.sh@821 -- # return 0 00:21:58.213 14:36:04 -- fips/fips.sh@16 -- # killprocess 79882 00:21:58.213 14:36:04 -- common/autotest_common.sh@936 -- # '[' -z 79882 ']' 00:21:58.213 14:36:04 -- common/autotest_common.sh@940 -- # kill -0 79882 00:21:58.213 14:36:04 -- common/autotest_common.sh@941 -- # uname 00:21:58.213 14:36:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.213 14:36:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79882 00:21:58.213 14:36:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:58.213 14:36:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:58.214 
14:36:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79882' 00:21:58.214 killing process with pid 79882 00:21:58.214 14:36:04 -- common/autotest_common.sh@955 -- # kill 79882 00:21:58.214 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.214 00:21:58.214 Latency(us) 00:21:58.214 [2024-12-06T14:36:05.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.214 [2024-12-06T14:36:05.184Z] =================================================================================================================== 00:21:58.214 [2024-12-06T14:36:05.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.214 14:36:04 -- common/autotest_common.sh@960 -- # wait 79882 00:21:58.471 14:36:05 -- fips/fips.sh@17 -- # nvmftestfini 00:21:58.471 14:36:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:58.471 14:36:05 -- nvmf/common.sh@116 -- # sync 00:21:58.471 14:36:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:58.471 14:36:05 -- nvmf/common.sh@119 -- # set +e 00:21:58.471 14:36:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:58.471 14:36:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:58.471 rmmod nvme_tcp 00:21:58.471 rmmod nvme_fabrics 00:21:58.471 rmmod nvme_keyring 00:21:58.471 14:36:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:58.471 14:36:05 -- nvmf/common.sh@123 -- # set -e 00:21:58.471 14:36:05 -- nvmf/common.sh@124 -- # return 0 00:21:58.471 14:36:05 -- nvmf/common.sh@477 -- # '[' -n 79824 ']' 00:21:58.471 14:36:05 -- nvmf/common.sh@478 -- # killprocess 79824 00:21:58.471 14:36:05 -- common/autotest_common.sh@936 -- # '[' -z 79824 ']' 00:21:58.471 14:36:05 -- common/autotest_common.sh@940 -- # kill -0 79824 00:21:58.471 14:36:05 -- common/autotest_common.sh@941 -- # uname 00:21:58.471 14:36:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.471 14:36:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79824 00:21:58.471 killing process with pid 79824 00:21:58.471 14:36:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:58.471 14:36:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:58.471 14:36:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79824' 00:21:58.471 14:36:05 -- common/autotest_common.sh@955 -- # kill 79824 00:21:58.471 14:36:05 -- common/autotest_common.sh@960 -- # wait 79824 00:21:58.728 14:36:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:58.728 14:36:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:58.728 14:36:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:58.728 14:36:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.728 14:36:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:58.728 14:36:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.728 14:36:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.728 14:36:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.728 14:36:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:58.728 14:36:05 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:58.728 ************************************ 00:21:58.728 END TEST nvmf_fips 00:21:58.728 ************************************ 00:21:58.728 00:21:58.728 real 0m14.566s 00:21:58.728 user 0m19.595s 00:21:58.728 sys 0m5.909s 00:21:58.728 14:36:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:58.728 
14:36:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.985 14:36:05 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:58.985 14:36:05 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:58.986 14:36:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:58.986 14:36:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.986 14:36:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.986 ************************************ 00:21:58.986 START TEST nvmf_fuzz 00:21:58.986 ************************************ 00:21:58.986 14:36:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:58.986 * Looking for test storage... 00:21:58.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:58.986 14:36:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:58.986 14:36:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:58.986 14:36:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:58.986 14:36:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:58.986 14:36:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:58.986 14:36:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:58.986 14:36:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:58.986 14:36:05 -- scripts/common.sh@335 -- # IFS=.-: 00:21:58.986 14:36:05 -- scripts/common.sh@335 -- # read -ra ver1 00:21:58.986 14:36:05 -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.986 14:36:05 -- scripts/common.sh@336 -- # read -ra ver2 00:21:58.986 14:36:05 -- scripts/common.sh@337 -- # local 'op=<' 00:21:58.986 14:36:05 -- scripts/common.sh@339 -- # ver1_l=2 00:21:58.986 14:36:05 -- scripts/common.sh@340 -- # ver2_l=1 00:21:58.986 14:36:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:58.986 14:36:05 -- scripts/common.sh@343 -- # case "$op" in 00:21:58.986 14:36:05 -- scripts/common.sh@344 -- # : 1 00:21:58.986 14:36:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:58.986 14:36:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.986 14:36:05 -- scripts/common.sh@364 -- # decimal 1 00:21:58.986 14:36:05 -- scripts/common.sh@352 -- # local d=1 00:21:58.986 14:36:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.986 14:36:05 -- scripts/common.sh@354 -- # echo 1 00:21:58.986 14:36:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:58.986 14:36:05 -- scripts/common.sh@365 -- # decimal 2 00:21:58.986 14:36:05 -- scripts/common.sh@352 -- # local d=2 00:21:58.986 14:36:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.986 14:36:05 -- scripts/common.sh@354 -- # echo 2 00:21:58.986 14:36:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:58.986 14:36:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:58.986 14:36:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:58.986 14:36:05 -- scripts/common.sh@367 -- # return 0 00:21:58.986 14:36:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.986 14:36:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:58.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.986 --rc genhtml_branch_coverage=1 00:21:58.986 --rc genhtml_function_coverage=1 00:21:58.986 --rc genhtml_legend=1 00:21:58.986 --rc geninfo_all_blocks=1 00:21:58.986 --rc geninfo_unexecuted_blocks=1 00:21:58.986 00:21:58.986 ' 00:21:58.986 14:36:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:58.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.986 --rc genhtml_branch_coverage=1 00:21:58.986 --rc genhtml_function_coverage=1 00:21:58.986 --rc genhtml_legend=1 00:21:58.986 --rc geninfo_all_blocks=1 00:21:58.986 --rc geninfo_unexecuted_blocks=1 00:21:58.986 00:21:58.986 ' 00:21:58.986 14:36:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:58.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.986 --rc genhtml_branch_coverage=1 00:21:58.986 --rc genhtml_function_coverage=1 00:21:58.986 --rc genhtml_legend=1 00:21:58.986 --rc geninfo_all_blocks=1 00:21:58.986 --rc geninfo_unexecuted_blocks=1 00:21:58.986 00:21:58.986 ' 00:21:58.986 14:36:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:58.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.986 --rc genhtml_branch_coverage=1 00:21:58.986 --rc genhtml_function_coverage=1 00:21:58.986 --rc genhtml_legend=1 00:21:58.986 --rc geninfo_all_blocks=1 00:21:58.986 --rc geninfo_unexecuted_blocks=1 00:21:58.986 00:21:58.986 ' 00:21:58.986 14:36:05 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:58.986 14:36:05 -- nvmf/common.sh@7 -- # uname -s 00:21:58.986 14:36:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.986 14:36:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.986 14:36:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.986 14:36:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.986 14:36:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.986 14:36:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.986 14:36:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.986 14:36:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.986 14:36:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.986 14:36:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.986 14:36:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
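The scripts/common.sh comparison traced a few lines above (here concluding that lcov 1.15 is less than 2) splits both version strings on '.', '-' and ':' and walks the numeric components left to right, treating missing components as 0. A minimal sketch of that idea, simplified and with illustrative names (the real helper also normalizes non-numeric components via its decimal function):

IFS=.-: read -ra ver1 <<< "1.15"
IFS=.-: read -ra ver2 <<< "2"
for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    a=${ver1[v]:-0}; b=${ver2[v]:-0}
    (( a > b )) && { echo greater; break; }
    (( a < b )) && { echo less; break; }    # 1 < 2, so this comparison reports "less"
done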
00:21:58.986 14:36:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:21:58.986 14:36:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.986 14:36:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.986 14:36:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:58.986 14:36:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.986 14:36:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.986 14:36:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.986 14:36:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.986 14:36:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.986 14:36:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.986 14:36:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.986 14:36:05 -- paths/export.sh@5 -- # export PATH 00:21:58.986 14:36:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.986 14:36:05 -- nvmf/common.sh@46 -- # : 0 00:21:58.986 14:36:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:58.986 14:36:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:58.986 14:36:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:58.986 14:36:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.986 14:36:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.986 14:36:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:21:58.986 14:36:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:58.986 14:36:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:59.244 14:36:05 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:59.244 14:36:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:59.244 14:36:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.244 14:36:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:59.244 14:36:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:59.244 14:36:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:59.244 14:36:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.244 14:36:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.244 14:36:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.244 14:36:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:59.244 14:36:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:59.244 14:36:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:59.244 14:36:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:59.244 14:36:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:59.244 14:36:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:59.244 14:36:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.244 14:36:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.244 14:36:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.244 14:36:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:59.244 14:36:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.244 14:36:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.244 14:36:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.244 14:36:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.244 14:36:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.244 14:36:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.244 14:36:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.244 14:36:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.244 14:36:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:59.244 14:36:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:59.244 Cannot find device "nvmf_tgt_br" 00:21:59.244 14:36:05 -- nvmf/common.sh@154 -- # true 00:21:59.244 14:36:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.244 Cannot find device "nvmf_tgt_br2" 00:21:59.244 14:36:06 -- nvmf/common.sh@155 -- # true 00:21:59.244 14:36:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:59.244 14:36:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:59.244 Cannot find device "nvmf_tgt_br" 00:21:59.244 14:36:06 -- nvmf/common.sh@157 -- # true 00:21:59.244 14:36:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:59.244 Cannot find device "nvmf_tgt_br2" 00:21:59.244 14:36:06 -- nvmf/common.sh@158 -- # true 00:21:59.244 14:36:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:59.244 14:36:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:59.244 14:36:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.244 14:36:06 -- nvmf/common.sh@161 -- # true 00:21:59.244 14:36:06 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.244 14:36:06 -- nvmf/common.sh@162 -- # true 00:21:59.244 14:36:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.244 14:36:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.244 14:36:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.244 14:36:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.244 14:36:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.244 14:36:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.244 14:36:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.244 14:36:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.244 14:36:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.245 14:36:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:59.245 14:36:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:59.245 14:36:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:59.245 14:36:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:59.245 14:36:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.245 14:36:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.245 14:36:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.503 14:36:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:59.503 14:36:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:59.503 14:36:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.503 14:36:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.503 14:36:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.503 14:36:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.503 14:36:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.503 14:36:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:59.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:21:59.503 00:21:59.503 --- 10.0.0.2 ping statistics --- 00:21:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.503 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:59.503 14:36:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:59.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:21:59.503 00:21:59.503 --- 10.0.0.3 ping statistics --- 00:21:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.503 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:59.503 14:36:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:59.503 00:21:59.503 --- 10.0.0.1 ping statistics --- 00:21:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.503 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:59.503 14:36:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.503 14:36:06 -- nvmf/common.sh@421 -- # return 0 00:21:59.503 14:36:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.503 14:36:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.503 14:36:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:59.503 14:36:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:59.503 14:36:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.503 14:36:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:59.503 14:36:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:59.503 14:36:06 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=80232 00:21:59.503 14:36:06 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:59.503 14:36:06 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:59.503 14:36:06 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 80232 00:21:59.503 14:36:06 -- common/autotest_common.sh@829 -- # '[' -z 80232 ']' 00:21:59.503 14:36:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.503 14:36:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.503 14:36:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
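At this point the fuzz-test target (pid 80232) has been launched inside the namespace with '-m 0x1' and the harness is blocking in waitforlisten until the RPC socket answers. A hedged approximation of what that wait amounts to; the real helper lives in autotest_common.sh and may differ in detail:

# Illustrative only: poll the target's RPC socket until it responds, then continue.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done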
00:21:59.503 14:36:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.503 14:36:06 -- common/autotest_common.sh@10 -- # set +x 00:22:00.908 14:36:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.908 14:36:07 -- common/autotest_common.sh@862 -- # return 0 00:22:00.908 14:36:07 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.908 14:36:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.908 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.908 14:36:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.908 14:36:07 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:00.908 14:36:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.908 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.908 Malloc0 00:22:00.908 14:36:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.908 14:36:07 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.908 14:36:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.908 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.908 14:36:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.908 14:36:07 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.908 14:36:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.908 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.908 14:36:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.908 14:36:07 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.908 14:36:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.908 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.908 14:36:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.908 14:36:07 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:00.908 14:36:07 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:01.167 Shutting down the fuzz application 00:22:01.167 14:36:07 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:01.426 Shutting down the fuzz application 00:22:01.426 14:36:08 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.426 14:36:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.426 14:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.426 14:36:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.426 14:36:08 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:01.426 14:36:08 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:01.426 14:36:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:01.426 14:36:08 -- nvmf/common.sh@116 -- # sync 00:22:01.684 14:36:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:01.684 14:36:08 -- nvmf/common.sh@119 -- # set +e 00:22:01.684 14:36:08 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:22:01.684 14:36:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:01.684 rmmod nvme_tcp 00:22:01.684 rmmod nvme_fabrics 00:22:01.684 rmmod nvme_keyring 00:22:01.684 14:36:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:01.684 14:36:08 -- nvmf/common.sh@123 -- # set -e 00:22:01.684 14:36:08 -- nvmf/common.sh@124 -- # return 0 00:22:01.684 14:36:08 -- nvmf/common.sh@477 -- # '[' -n 80232 ']' 00:22:01.684 14:36:08 -- nvmf/common.sh@478 -- # killprocess 80232 00:22:01.684 14:36:08 -- common/autotest_common.sh@936 -- # '[' -z 80232 ']' 00:22:01.684 14:36:08 -- common/autotest_common.sh@940 -- # kill -0 80232 00:22:01.684 14:36:08 -- common/autotest_common.sh@941 -- # uname 00:22:01.684 14:36:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:01.684 14:36:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80232 00:22:01.684 killing process with pid 80232 00:22:01.684 14:36:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:01.684 14:36:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:01.684 14:36:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80232' 00:22:01.684 14:36:08 -- common/autotest_common.sh@955 -- # kill 80232 00:22:01.684 14:36:08 -- common/autotest_common.sh@960 -- # wait 80232 00:22:01.943 14:36:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:01.943 14:36:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:01.943 14:36:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:01.943 14:36:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.943 14:36:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:01.943 14:36:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.943 14:36:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.943 14:36:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.943 14:36:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:01.943 14:36:08 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:22:01.943 ************************************ 00:22:01.943 END TEST nvmf_fuzz 00:22:01.943 ************************************ 00:22:01.943 00:22:01.943 real 0m3.100s 00:22:01.943 user 0m3.389s 00:22:01.943 sys 0m0.728s 00:22:01.943 14:36:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:01.943 14:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.943 14:36:08 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:01.943 14:36:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:01.943 14:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.943 14:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.943 ************************************ 00:22:01.943 START TEST nvmf_multiconnection 00:22:01.943 ************************************ 00:22:01.943 14:36:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:02.203 * Looking for test storage... 
00:22:02.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:02.203 14:36:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:02.203 14:36:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:02.203 14:36:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:02.203 14:36:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:02.203 14:36:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:02.203 14:36:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:02.203 14:36:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:02.203 14:36:09 -- scripts/common.sh@335 -- # IFS=.-: 00:22:02.203 14:36:09 -- scripts/common.sh@335 -- # read -ra ver1 00:22:02.203 14:36:09 -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.203 14:36:09 -- scripts/common.sh@336 -- # read -ra ver2 00:22:02.203 14:36:09 -- scripts/common.sh@337 -- # local 'op=<' 00:22:02.203 14:36:09 -- scripts/common.sh@339 -- # ver1_l=2 00:22:02.203 14:36:09 -- scripts/common.sh@340 -- # ver2_l=1 00:22:02.203 14:36:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:02.203 14:36:09 -- scripts/common.sh@343 -- # case "$op" in 00:22:02.203 14:36:09 -- scripts/common.sh@344 -- # : 1 00:22:02.203 14:36:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:02.203 14:36:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.203 14:36:09 -- scripts/common.sh@364 -- # decimal 1 00:22:02.203 14:36:09 -- scripts/common.sh@352 -- # local d=1 00:22:02.203 14:36:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.203 14:36:09 -- scripts/common.sh@354 -- # echo 1 00:22:02.203 14:36:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:02.203 14:36:09 -- scripts/common.sh@365 -- # decimal 2 00:22:02.203 14:36:09 -- scripts/common.sh@352 -- # local d=2 00:22:02.203 14:36:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.203 14:36:09 -- scripts/common.sh@354 -- # echo 2 00:22:02.203 14:36:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:02.203 14:36:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:02.203 14:36:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:02.203 14:36:09 -- scripts/common.sh@367 -- # return 0 00:22:02.203 14:36:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.203 14:36:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:02.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.203 --rc genhtml_branch_coverage=1 00:22:02.203 --rc genhtml_function_coverage=1 00:22:02.203 --rc genhtml_legend=1 00:22:02.203 --rc geninfo_all_blocks=1 00:22:02.203 --rc geninfo_unexecuted_blocks=1 00:22:02.203 00:22:02.203 ' 00:22:02.203 14:36:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:02.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.203 --rc genhtml_branch_coverage=1 00:22:02.203 --rc genhtml_function_coverage=1 00:22:02.203 --rc genhtml_legend=1 00:22:02.203 --rc geninfo_all_blocks=1 00:22:02.203 --rc geninfo_unexecuted_blocks=1 00:22:02.203 00:22:02.203 ' 00:22:02.203 14:36:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:02.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.203 --rc genhtml_branch_coverage=1 00:22:02.203 --rc genhtml_function_coverage=1 00:22:02.203 --rc genhtml_legend=1 00:22:02.203 --rc geninfo_all_blocks=1 00:22:02.203 --rc geninfo_unexecuted_blocks=1 00:22:02.203 00:22:02.203 ' 00:22:02.203 
14:36:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:02.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.203 --rc genhtml_branch_coverage=1 00:22:02.203 --rc genhtml_function_coverage=1 00:22:02.203 --rc genhtml_legend=1 00:22:02.203 --rc geninfo_all_blocks=1 00:22:02.203 --rc geninfo_unexecuted_blocks=1 00:22:02.203 00:22:02.203 ' 00:22:02.203 14:36:09 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:02.203 14:36:09 -- nvmf/common.sh@7 -- # uname -s 00:22:02.203 14:36:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.203 14:36:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.203 14:36:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.203 14:36:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.203 14:36:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.203 14:36:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.203 14:36:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.203 14:36:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.203 14:36:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.203 14:36:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.203 14:36:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:22:02.203 14:36:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:22:02.203 14:36:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.203 14:36:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.203 14:36:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:02.203 14:36:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.203 14:36:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.203 14:36:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.203 14:36:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.203 14:36:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.203 14:36:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.203 14:36:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.203 14:36:09 -- paths/export.sh@5 -- # export PATH 00:22:02.203 14:36:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.203 14:36:09 -- nvmf/common.sh@46 -- # : 0 00:22:02.203 14:36:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:02.203 14:36:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:02.203 14:36:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:02.203 14:36:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.203 14:36:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.203 14:36:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:02.203 14:36:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:02.203 14:36:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:02.203 14:36:09 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:02.203 14:36:09 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.203 14:36:09 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:02.203 14:36:09 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:02.203 14:36:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:02.203 14:36:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.203 14:36:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:02.203 14:36:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:02.203 14:36:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:02.203 14:36:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.203 14:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.203 14:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.203 14:36:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:02.203 14:36:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:02.203 14:36:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:02.203 14:36:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:02.203 14:36:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:02.203 14:36:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:02.203 14:36:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.203 14:36:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.203 14:36:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:02.203 14:36:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:02.203 14:36:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:02.203 14:36:09 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:02.203 14:36:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:02.203 14:36:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.203 14:36:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:02.203 14:36:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:02.203 14:36:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:02.203 14:36:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:02.203 14:36:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:02.204 14:36:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:02.204 Cannot find device "nvmf_tgt_br" 00:22:02.204 14:36:09 -- nvmf/common.sh@154 -- # true 00:22:02.204 14:36:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:02.204 Cannot find device "nvmf_tgt_br2" 00:22:02.204 14:36:09 -- nvmf/common.sh@155 -- # true 00:22:02.204 14:36:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:02.204 14:36:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:02.462 Cannot find device "nvmf_tgt_br" 00:22:02.462 14:36:09 -- nvmf/common.sh@157 -- # true 00:22:02.462 14:36:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:02.462 Cannot find device "nvmf_tgt_br2" 00:22:02.462 14:36:09 -- nvmf/common.sh@158 -- # true 00:22:02.462 14:36:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:02.462 14:36:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:02.462 14:36:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:02.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.462 14:36:09 -- nvmf/common.sh@161 -- # true 00:22:02.462 14:36:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:02.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.462 14:36:09 -- nvmf/common.sh@162 -- # true 00:22:02.462 14:36:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:02.462 14:36:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:02.462 14:36:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:02.462 14:36:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:02.462 14:36:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:02.462 14:36:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:02.462 14:36:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:02.462 14:36:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:02.462 14:36:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:02.462 14:36:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:02.462 14:36:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:02.462 14:36:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:02.462 14:36:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:02.462 14:36:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:02.462 14:36:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:22:02.462 14:36:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:02.462 14:36:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:02.462 14:36:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:02.462 14:36:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:02.462 14:36:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:02.462 14:36:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:02.720 14:36:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:02.720 14:36:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:02.720 14:36:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:02.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:02.720 00:22:02.720 --- 10.0.0.2 ping statistics --- 00:22:02.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.720 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:02.720 14:36:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:02.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:02.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:22:02.720 00:22:02.720 --- 10.0.0.3 ping statistics --- 00:22:02.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.720 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:02.720 14:36:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:02.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:02.720 00:22:02.720 --- 10.0.0.1 ping statistics --- 00:22:02.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.720 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:02.720 14:36:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.720 14:36:09 -- nvmf/common.sh@421 -- # return 0 00:22:02.720 14:36:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:02.720 14:36:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.720 14:36:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:02.720 14:36:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:02.720 14:36:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.720 14:36:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:02.720 14:36:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:02.720 14:36:09 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:02.720 14:36:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:02.720 14:36:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.720 14:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:02.720 14:36:09 -- nvmf/common.sh@469 -- # nvmfpid=80458 00:22:02.720 14:36:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:02.720 14:36:09 -- nvmf/common.sh@470 -- # waitforlisten 80458 00:22:02.720 14:36:09 -- common/autotest_common.sh@829 -- # '[' -z 80458 ']' 00:22:02.720 14:36:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.720 14:36:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.720 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:22:02.720 14:36:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.720 14:36:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.720 14:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:02.720 [2024-12-06 14:36:09.548793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:02.720 [2024-12-06 14:36:09.548921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.979 [2024-12-06 14:36:09.691666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.979 [2024-12-06 14:36:09.824129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:02.979 [2024-12-06 14:36:09.824534] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.979 [2024-12-06 14:36:09.824599] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.979 [2024-12-06 14:36:09.824756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.979 [2024-12-06 14:36:09.824965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.979 [2024-12-06 14:36:09.825261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.979 [2024-12-06 14:36:09.825440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.979 [2024-12-06 14:36:09.825455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.914 14:36:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.914 14:36:10 -- common/autotest_common.sh@862 -- # return 0 00:22:03.914 14:36:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:03.914 14:36:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 14:36:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.914 14:36:10 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 [2024-12-06 14:36:10.663832] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:03.914 14:36:10 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.914 14:36:10 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 Malloc1 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 
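The nvmf_veth_init sequence above (nvmf/common.sh@140 onward) builds the test topology before the target is launched; the "Cannot find device" and "Cannot open network namespace" messages are just the teardown of any leftover setup from a previous run and are expected. Condensed into plain commands, with the interface names, addresses and firewall rules as they appear in the trace (the verbatim helper lives in test/nvmf/common.sh and may differ in ordering and error handling), the setup is roughly:

    # target-side interfaces live in their own namespace; the initiator side stays in the default one
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1 <-> bridge
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2 <-> bridge
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # ties the three host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMF_PORT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) only verify that the bridge forwards traffic; nvmf_tgt is then started inside the namespace via ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD at nvmf/common.sh@208.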
14:36:10 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 [2024-12-06 14:36:10.741469] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.914 14:36:10 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 Malloc2 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.914 14:36:10 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 Malloc3 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.914 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.914 14:36:10 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:03.914 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.914 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.915 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.915 14:36:10 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
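The Malloc1/cnode1 through Malloc3/cnode3 sequence just traced repeats unchanged up to Malloc11/cnode11. Reconstructed from the xtrace above (multiconnection.sh@11, @12, @14 and @21 to @25), the loop that drives it is essentially the following sketch, with rpc_cmd being the autotest wrapper that forwards to scripts/rpc.py over /var/tmp/spdk.sock:

    MALLOC_BDEV_SIZE=64       # multiconnection.sh@11
    MALLOC_BLOCK_SIZE=512     # multiconnection.sh@12
    NVMF_SUBSYS=11            # multiconnection.sh@14
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp \
            -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"    # 10.0.0.2:4420 in this run
    done

Here -a lets any host NQN connect to the subsystem and -s sets the serial number (SPDK1 through SPDK11) that the later waitforserial checks grep for in lsblk output.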
00:22:03.915 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.915 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.915 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.915 14:36:10 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.915 14:36:10 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:03.915 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.915 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.915 Malloc4 00:22:03.915 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.915 14:36:10 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:03.915 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.915 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.173 14:36:10 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 Malloc5 00:22:04.173 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 14:36:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.173 14:36:10 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 Malloc6 00:22:04.173 14:36:10 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:10 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:04.173 14:36:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.173 14:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.173 14:36:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.174 14:36:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 Malloc7 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.174 14:36:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 Malloc8 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 
-- common/autotest_common.sh@10 -- # set +x 00:22:04.174 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.174 14:36:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:04.174 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.174 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.432 14:36:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 Malloc9 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.432 14:36:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 Malloc10 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.432 14:36:11 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 Malloc11 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:04.432 14:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.432 14:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:04.432 14:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.432 14:36:11 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:04.432 14:36:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.432 14:36:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:04.689 14:36:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:04.689 14:36:11 -- common/autotest_common.sh@1187 -- # local i=0 00:22:04.689 14:36:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.689 14:36:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:04.689 14:36:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:06.622 14:36:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:06.622 14:36:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:06.622 14:36:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:22:06.622 14:36:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:06.622 14:36:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.622 14:36:13 -- common/autotest_common.sh@1197 -- # return 0 00:22:06.622 14:36:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.622 14:36:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:06.906 14:36:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:06.906 14:36:13 -- common/autotest_common.sh@1187 -- # local i=0 00:22:06.906 14:36:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:06.906 14:36:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:06.906 14:36:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:08.806 14:36:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:08.806 14:36:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:22:08.806 14:36:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:22:08.806 14:36:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:08.806 14:36:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.806 14:36:15 -- common/autotest_common.sh@1197 -- # return 0 00:22:08.806 14:36:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.806 14:36:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:09.064 14:36:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:09.065 14:36:15 -- common/autotest_common.sh@1187 -- # local i=0 00:22:09.065 14:36:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:09.065 14:36:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:09.065 14:36:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:10.963 14:36:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:10.963 14:36:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:10.963 14:36:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:22:10.963 14:36:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:10.963 14:36:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.963 14:36:17 -- common/autotest_common.sh@1197 -- # return 0 00:22:10.963 14:36:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.963 14:36:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:11.228 14:36:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:11.228 14:36:18 -- common/autotest_common.sh@1187 -- # local i=0 00:22:11.228 14:36:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:11.228 14:36:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:11.228 14:36:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:13.170 14:36:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:13.170 14:36:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:13.170 14:36:20 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:22:13.170 14:36:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:13.170 14:36:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:13.170 14:36:20 -- common/autotest_common.sh@1197 -- # return 0 00:22:13.170 14:36:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.170 14:36:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:13.427 14:36:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:13.427 14:36:20 -- common/autotest_common.sh@1187 -- # local i=0 00:22:13.427 14:36:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.427 14:36:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:13.427 14:36:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:15.330 14:36:22 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:15.330 14:36:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:15.330 14:36:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:22:15.330 14:36:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:15.330 14:36:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.330 14:36:22 -- common/autotest_common.sh@1197 -- # return 0 00:22:15.330 14:36:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.330 14:36:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:15.590 14:36:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:15.590 14:36:22 -- common/autotest_common.sh@1187 -- # local i=0 00:22:15.590 14:36:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.590 14:36:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:15.590 14:36:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:18.123 14:36:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:18.123 14:36:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:18.123 14:36:24 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:22:18.123 14:36:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:18.123 14:36:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.123 14:36:24 -- common/autotest_common.sh@1197 -- # return 0 00:22:18.123 14:36:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.123 14:36:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:18.123 14:36:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:18.123 14:36:24 -- common/autotest_common.sh@1187 -- # local i=0 00:22:18.123 14:36:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.123 14:36:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:18.123 14:36:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:20.025 14:36:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:20.025 14:36:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:20.025 14:36:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:22:20.025 14:36:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:20.025 14:36:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.025 14:36:26 -- common/autotest_common.sh@1197 -- # return 0 00:22:20.025 14:36:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.025 14:36:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:20.025 14:36:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:20.025 14:36:26 -- common/autotest_common.sh@1187 -- # local i=0 00:22:20.025 14:36:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:20.025 14:36:26 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:20.025 14:36:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:21.929 14:36:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:21.929 14:36:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:21.929 14:36:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:22:22.189 14:36:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:22.189 14:36:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:22.189 14:36:28 -- common/autotest_common.sh@1197 -- # return 0 00:22:22.189 14:36:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:22.189 14:36:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:22.189 14:36:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:22.189 14:36:29 -- common/autotest_common.sh@1187 -- # local i=0 00:22:22.189 14:36:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.189 14:36:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:22.189 14:36:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:24.721 14:36:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:24.721 14:36:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:24.721 14:36:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:22:24.721 14:36:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:24.721 14:36:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.721 14:36:31 -- common/autotest_common.sh@1197 -- # return 0 00:22:24.721 14:36:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.721 14:36:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:24.721 14:36:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:24.721 14:36:31 -- common/autotest_common.sh@1187 -- # local i=0 00:22:24.721 14:36:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:24.721 14:36:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:24.721 14:36:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:26.621 14:36:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:26.621 14:36:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:26.621 14:36:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:22:26.621 14:36:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:26.621 14:36:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:26.621 14:36:33 -- common/autotest_common.sh@1197 -- # return 0 00:22:26.621 14:36:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.621 14:36:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:26.621 14:36:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:26.621 14:36:33 -- common/autotest_common.sh@1187 -- # local i=0 
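Each nvme connect in this stretch of the trace is followed by a waitforserial call that polls lsblk until a namespace with the matching SPDKn serial shows up, which is what produces the roughly two-second cadence in the timestamps (14:36:11, 14:36:13, 14:36:15, ...). Pieced together from the traced statements (autotest_common.sh@1187 to @1197), the connect-and-wait logic behaves approximately like the sketch below; the real helper may order its checks and sleeps slightly differently:

    # reconstructed sketch of the polling helper, not the verbatim function
    waitforserial() {
        local serial=$1
        local i=0 nvme_device_counter=1 nvme_devices=0
        [[ -n ${2:-} ]] && nvme_device_counter=$2    # optional expected device count
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

    # one initiator connection per subsystem; NVME_HOST carries --hostnqn/--hostid from nvmf/common.sh@19
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect "${NVME_HOST[@]}" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" \
            -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
        waitforserial "SPDK$i"
    done

Once all eleven connects have returned, /dev/nvme0n1 through /dev/nvme10n1 exist on the initiator side and become the per-job filenames in the fio runs that follow.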
00:22:26.621 14:36:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:26.621 14:36:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:26.621 14:36:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:29.145 14:36:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:29.145 14:36:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:29.145 14:36:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:22:29.145 14:36:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:29.145 14:36:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.145 14:36:35 -- common/autotest_common.sh@1197 -- # return 0 00:22:29.145 14:36:35 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:29.145 [global] 00:22:29.145 thread=1 00:22:29.145 invalidate=1 00:22:29.145 rw=read 00:22:29.145 time_based=1 00:22:29.145 runtime=10 00:22:29.145 ioengine=libaio 00:22:29.145 direct=1 00:22:29.145 bs=262144 00:22:29.145 iodepth=64 00:22:29.145 norandommap=1 00:22:29.145 numjobs=1 00:22:29.145 00:22:29.145 [job0] 00:22:29.145 filename=/dev/nvme0n1 00:22:29.145 [job1] 00:22:29.145 filename=/dev/nvme10n1 00:22:29.145 [job2] 00:22:29.145 filename=/dev/nvme1n1 00:22:29.145 [job3] 00:22:29.145 filename=/dev/nvme2n1 00:22:29.145 [job4] 00:22:29.145 filename=/dev/nvme3n1 00:22:29.145 [job5] 00:22:29.145 filename=/dev/nvme4n1 00:22:29.145 [job6] 00:22:29.145 filename=/dev/nvme5n1 00:22:29.145 [job7] 00:22:29.145 filename=/dev/nvme6n1 00:22:29.145 [job8] 00:22:29.145 filename=/dev/nvme7n1 00:22:29.145 [job9] 00:22:29.145 filename=/dev/nvme8n1 00:22:29.145 [job10] 00:22:29.145 filename=/dev/nvme9n1 00:22:29.145 Could not set queue depth (nvme0n1) 00:22:29.145 Could not set queue depth (nvme10n1) 00:22:29.145 Could not set queue depth (nvme1n1) 00:22:29.145 Could not set queue depth (nvme2n1) 00:22:29.145 Could not set queue depth (nvme3n1) 00:22:29.145 Could not set queue depth (nvme4n1) 00:22:29.145 Could not set queue depth (nvme5n1) 00:22:29.145 Could not set queue depth (nvme6n1) 00:22:29.145 Could not set queue depth (nvme7n1) 00:22:29.145 Could not set queue depth (nvme8n1) 00:22:29.145 Could not set queue depth (nvme9n1) 00:22:29.145 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:22:29.145 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.145 fio-3.35 00:22:29.145 Starting 11 threads 00:22:41.374 00:22:41.374 job0: (groupid=0, jobs=1): err= 0: pid=80930: Fri Dec 6 14:36:46 2024 00:22:41.374 read: IOPS=365, BW=91.5MiB/s (95.9MB/s)(927MiB/10128msec) 00:22:41.374 slat (usec): min=16, max=130815, avg=2386.84, stdev=8286.12 00:22:41.374 clat (usec): min=913, max=311812, avg=171947.72, stdev=49187.25 00:22:41.374 lat (usec): min=1769, max=378474, avg=174334.56, stdev=50248.49 00:22:41.374 clat percentiles (msec): 00:22:41.374 | 1.00th=[ 5], 5.00th=[ 85], 10.00th=[ 117], 20.00th=[ 148], 00:22:41.374 | 30.00th=[ 159], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 184], 00:22:41.374 | 70.00th=[ 190], 80.00th=[ 201], 90.00th=[ 222], 95.00th=[ 251], 00:22:41.374 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 313], 99.95th=[ 313], 00:22:41.374 | 99.99th=[ 313] 00:22:41.374 bw ( KiB/s): min=74602, max=122368, per=7.80%, avg=93226.20, stdev=13466.02, samples=20 00:22:41.374 iops : min= 291, max= 478, avg=364.10, stdev=52.65, samples=20 00:22:41.374 lat (usec) : 1000=0.03% 00:22:41.374 lat (msec) : 2=0.03%, 4=0.57%, 10=1.54%, 20=0.65%, 50=0.24% 00:22:41.374 lat (msec) : 100=3.10%, 250=88.91%, 500=4.94% 00:22:41.374 cpu : usr=0.19%, sys=1.47%, ctx=754, majf=0, minf=4097 00:22:41.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:41.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.374 issued rwts: total=3706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.374 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.374 job1: (groupid=0, jobs=1): err= 0: pid=80931: Fri Dec 6 14:36:46 2024 00:22:41.374 read: IOPS=395, BW=98.9MiB/s (104MB/s)(1001MiB/10123msec) 00:22:41.374 slat (usec): min=16, max=88804, avg=2061.06, stdev=7494.21 00:22:41.374 clat (msec): min=27, max=345, avg=159.36, stdev=47.65 00:22:41.374 lat (msec): min=28, max=376, avg=161.42, stdev=48.63 00:22:41.374 clat percentiles (msec): 00:22:41.374 | 1.00th=[ 49], 5.00th=[ 85], 10.00th=[ 95], 20.00th=[ 123], 00:22:41.374 | 30.00th=[ 138], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 171], 00:22:41.374 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 211], 95.00th=[ 241], 00:22:41.374 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 338], 00:22:41.374 | 99.99th=[ 347] 00:22:41.374 bw ( KiB/s): min=70144, max=172032, per=8.43%, avg=100829.30, stdev=23469.41, samples=20 00:22:41.374 iops : min= 274, max= 672, avg=393.85, stdev=91.68, samples=20 00:22:41.374 lat (msec) : 50=1.15%, 100=12.49%, 250=82.64%, 500=3.72% 00:22:41.374 cpu : usr=0.17%, sys=1.63%, ctx=740, majf=0, minf=4097 00:22:41.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:41.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.374 issued rwts: total=4003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.374 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.374 job2: (groupid=0, jobs=1): err= 0: pid=80932: Fri Dec 6 14:36:46 2024 00:22:41.374 read: IOPS=399, BW=99.8MiB/s (105MB/s)(1014MiB/10162msec) 00:22:41.374 slat (usec): min=17, max=137106, avg=2120.05, stdev=8384.25 00:22:41.374 clat (msec): min=44, max=417, avg=157.80, stdev=56.13 00:22:41.374 lat 
(msec): min=45, max=417, avg=159.92, stdev=57.09 00:22:41.374 clat percentiles (msec): 00:22:41.374 | 1.00th=[ 67], 5.00th=[ 81], 10.00th=[ 90], 20.00th=[ 100], 00:22:41.374 | 30.00th=[ 115], 40.00th=[ 144], 50.00th=[ 159], 60.00th=[ 171], 00:22:41.374 | 70.00th=[ 184], 80.00th=[ 203], 90.00th=[ 228], 95.00th=[ 255], 00:22:41.374 | 99.00th=[ 300], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 397], 00:22:41.374 | 99.99th=[ 418] 00:22:41.374 bw ( KiB/s): min=61317, max=177664, per=8.55%, avg=102199.00, stdev=34285.41, samples=20 00:22:41.374 iops : min= 239, max= 694, avg=399.10, stdev=133.88, samples=20 00:22:41.374 lat (msec) : 50=0.39%, 100=20.25%, 250=73.81%, 500=5.55% 00:22:41.374 cpu : usr=0.22%, sys=1.50%, ctx=711, majf=0, minf=4097 00:22:41.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:41.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.374 issued rwts: total=4055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.374 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.374 job3: (groupid=0, jobs=1): err= 0: pid=80933: Fri Dec 6 14:36:46 2024 00:22:41.374 read: IOPS=456, BW=114MiB/s (120MB/s)(1160MiB/10165msec) 00:22:41.374 slat (usec): min=16, max=100503, avg=1911.80, stdev=7315.37 00:22:41.374 clat (usec): min=1431, max=419092, avg=137972.48, stdev=64880.21 00:22:41.375 lat (usec): min=1650, max=419128, avg=139884.27, stdev=65987.58 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 44], 20.00th=[ 85], 00:22:41.375 | 30.00th=[ 109], 40.00th=[ 131], 50.00th=[ 142], 60.00th=[ 153], 00:22:41.375 | 70.00th=[ 171], 80.00th=[ 192], 90.00th=[ 218], 95.00th=[ 243], 00:22:41.375 | 99.00th=[ 284], 99.50th=[ 321], 99.90th=[ 372], 99.95th=[ 418], 00:22:41.375 | 99.99th=[ 418] 00:22:41.375 bw ( KiB/s): min=67584, max=307608, per=9.79%, avg=117112.15, stdev=58333.27, samples=20 00:22:41.375 iops : min= 264, max= 1201, avg=457.40, stdev=227.79, samples=20 00:22:41.375 lat (msec) : 2=0.06%, 4=0.71%, 10=1.79%, 20=3.00%, 50=6.55% 00:22:41.375 lat (msec) : 100=15.41%, 250=69.18%, 500=3.30% 00:22:41.375 cpu : usr=0.17%, sys=1.61%, ctx=947, majf=0, minf=4097 00:22:41.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:41.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.375 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.375 job4: (groupid=0, jobs=1): err= 0: pid=80934: Fri Dec 6 14:36:46 2024 00:22:41.375 read: IOPS=389, BW=97.3MiB/s (102MB/s)(989MiB/10166msec) 00:22:41.375 slat (usec): min=17, max=127975, avg=2180.27, stdev=8803.97 00:22:41.375 clat (usec): min=1854, max=420014, avg=161827.39, stdev=57499.34 00:22:41.375 lat (usec): min=1902, max=420053, avg=164007.66, stdev=58753.86 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 6], 5.00th=[ 32], 10.00th=[ 71], 20.00th=[ 136], 00:22:41.375 | 30.00th=[ 148], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 180], 00:22:41.375 | 70.00th=[ 186], 80.00th=[ 199], 90.00th=[ 224], 95.00th=[ 239], 00:22:41.375 | 99.00th=[ 292], 99.50th=[ 313], 99.90th=[ 422], 99.95th=[ 422], 00:22:41.375 | 99.99th=[ 422] 00:22:41.375 bw ( KiB/s): min=64000, max=179712, per=8.33%, avg=99647.05, stdev=24984.07, samples=20 
00:22:41.375 iops : min= 250, max= 702, avg=389.20, stdev=97.62, samples=20 00:22:41.375 lat (msec) : 2=0.03%, 4=0.13%, 10=1.77%, 20=0.61%, 50=5.64% 00:22:41.375 lat (msec) : 100=3.26%, 250=85.60%, 500=2.98% 00:22:41.375 cpu : usr=0.12%, sys=1.48%, ctx=816, majf=0, minf=4097 00:22:41.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:41.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.375 issued rwts: total=3957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.375 job5: (groupid=0, jobs=1): err= 0: pid=80935: Fri Dec 6 14:36:46 2024 00:22:41.375 read: IOPS=506, BW=127MiB/s (133MB/s)(1272MiB/10051msec) 00:22:41.375 slat (usec): min=16, max=145635, avg=1790.45, stdev=7176.14 00:22:41.375 clat (msec): min=31, max=427, avg=124.43, stdev=52.30 00:22:41.375 lat (msec): min=37, max=431, avg=126.22, stdev=53.17 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 54], 5.00th=[ 68], 10.00th=[ 79], 20.00th=[ 88], 00:22:41.375 | 30.00th=[ 95], 40.00th=[ 102], 50.00th=[ 109], 60.00th=[ 117], 00:22:41.375 | 70.00th=[ 134], 80.00th=[ 155], 90.00th=[ 199], 95.00th=[ 253], 00:22:41.375 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 330], 00:22:41.375 | 99.99th=[ 426] 00:22:41.375 bw ( KiB/s): min=61317, max=201216, per=10.75%, avg=128511.05, stdev=42944.79, samples=20 00:22:41.375 iops : min= 239, max= 786, avg=501.90, stdev=167.73, samples=20 00:22:41.375 lat (msec) : 50=0.37%, 100=37.89%, 250=56.43%, 500=5.31% 00:22:41.375 cpu : usr=0.10%, sys=2.03%, ctx=941, majf=0, minf=4097 00:22:41.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:41.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.375 issued rwts: total=5086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.375 job6: (groupid=0, jobs=1): err= 0: pid=80936: Fri Dec 6 14:36:46 2024 00:22:41.375 read: IOPS=527, BW=132MiB/s (138MB/s)(1336MiB/10127msec) 00:22:41.375 slat (usec): min=17, max=138485, avg=1748.92, stdev=7364.01 00:22:41.375 clat (msec): min=9, max=302, avg=119.33, stdev=58.70 00:22:41.375 lat (msec): min=9, max=302, avg=121.08, stdev=59.83 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 41], 20.00th=[ 53], 00:22:41.375 | 30.00th=[ 70], 40.00th=[ 107], 50.00th=[ 134], 60.00th=[ 148], 00:22:41.375 | 70.00th=[ 159], 80.00th=[ 171], 90.00th=[ 190], 95.00th=[ 205], 00:22:41.375 | 99.00th=[ 241], 99.50th=[ 266], 99.90th=[ 305], 99.95th=[ 305], 00:22:41.375 | 99.99th=[ 305] 00:22:41.375 bw ( KiB/s): min=71168, max=342016, per=11.30%, avg=135150.85, stdev=75016.23, samples=20 00:22:41.375 iops : min= 278, max= 1336, avg=527.90, stdev=292.98, samples=20 00:22:41.375 lat (msec) : 10=0.09%, 20=1.22%, 50=16.37%, 100=21.48%, 250=60.09% 00:22:41.375 lat (msec) : 500=0.75% 00:22:41.375 cpu : usr=0.24%, sys=1.79%, ctx=1199, majf=0, minf=4098 00:22:41.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:41.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.375 issued rwts: total=5345,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:41.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.375 job7: (groupid=0, jobs=1): err= 0: pid=80938: Fri Dec 6 14:36:46 2024 00:22:41.375 read: IOPS=403, BW=101MiB/s (106MB/s)(1013MiB/10049msec) 00:22:41.375 slat (usec): min=19, max=88905, avg=2383.63, stdev=7717.37 00:22:41.375 clat (msec): min=33, max=352, avg=156.02, stdev=54.90 00:22:41.375 lat (msec): min=57, max=360, avg=158.41, stdev=55.89 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 72], 5.00th=[ 83], 10.00th=[ 92], 20.00th=[ 103], 00:22:41.375 | 30.00th=[ 114], 40.00th=[ 128], 50.00th=[ 153], 60.00th=[ 180], 00:22:41.375 | 70.00th=[ 190], 80.00th=[ 203], 90.00th=[ 226], 95.00th=[ 251], 00:22:41.375 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 326], 99.95th=[ 342], 00:22:41.375 | 99.99th=[ 355] 00:22:41.375 bw ( KiB/s): min=46499, max=169984, per=8.53%, avg=102044.80, stdev=35039.86, samples=20 00:22:41.375 iops : min= 181, max= 664, avg=398.50, stdev=136.97, samples=20 00:22:41.375 lat (msec) : 50=0.02%, 100=17.82%, 250=76.97%, 500=5.18% 00:22:41.375 cpu : usr=0.19%, sys=1.60%, ctx=722, majf=0, minf=4097 00:22:41.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:41.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.375 issued rwts: total=4051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.375 job8: (groupid=0, jobs=1): err= 0: pid=80944: Fri Dec 6 14:36:46 2024 00:22:41.375 read: IOPS=441, BW=110MiB/s (116MB/s)(1122MiB/10163msec) 00:22:41.375 slat (usec): min=15, max=84174, avg=1929.85, stdev=7273.90 00:22:41.375 clat (msec): min=61, max=381, avg=142.64, stdev=60.85 00:22:41.375 lat (msec): min=61, max=381, avg=144.57, stdev=61.60 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 71], 5.00th=[ 81], 10.00th=[ 88], 20.00th=[ 95], 00:22:41.375 | 30.00th=[ 103], 40.00th=[ 113], 50.00th=[ 126], 60.00th=[ 138], 00:22:41.375 | 70.00th=[ 150], 80.00th=[ 182], 90.00th=[ 241], 95.00th=[ 271], 00:22:41.375 | 99.00th=[ 347], 99.50th=[ 372], 99.90th=[ 380], 99.95th=[ 380], 00:22:41.375 | 99.99th=[ 380] 00:22:41.375 bw ( KiB/s): min=51200, max=178842, per=9.47%, avg=113238.35, stdev=39655.73, samples=20 00:22:41.375 iops : min= 200, max= 698, avg=442.25, stdev=154.85, samples=20 00:22:41.375 lat (msec) : 100=27.06%, 250=64.70%, 500=8.25% 00:22:41.375 cpu : usr=0.21%, sys=1.62%, ctx=863, majf=0, minf=4097 00:22:41.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:41.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.375 issued rwts: total=4487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.375 job9: (groupid=0, jobs=1): err= 0: pid=80945: Fri Dec 6 14:36:46 2024 00:22:41.375 read: IOPS=353, BW=88.3MiB/s (92.6MB/s)(894MiB/10125msec) 00:22:41.375 slat (usec): min=16, max=98841, avg=2495.41, stdev=8738.38 00:22:41.375 clat (msec): min=41, max=340, avg=178.15, stdev=52.72 00:22:41.375 lat (msec): min=42, max=340, avg=180.64, stdev=53.66 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 51], 5.00th=[ 69], 10.00th=[ 87], 20.00th=[ 157], 00:22:41.375 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:22:41.375 
| 70.00th=[ 199], 80.00th=[ 211], 90.00th=[ 234], 95.00th=[ 257], 00:22:41.375 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 334], 00:22:41.375 | 99.99th=[ 342] 00:22:41.375 bw ( KiB/s): min=46499, max=190976, per=7.52%, avg=89900.40, stdev=27070.76, samples=20 00:22:41.375 iops : min= 181, max= 746, avg=351.10, stdev=105.81, samples=20 00:22:41.375 lat (msec) : 50=1.01%, 100=11.21%, 250=81.71%, 500=6.07% 00:22:41.375 cpu : usr=0.13%, sys=1.32%, ctx=737, majf=0, minf=4097 00:22:41.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:22:41.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.375 issued rwts: total=3576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.375 job10: (groupid=0, jobs=1): err= 0: pid=80946: Fri Dec 6 14:36:46 2024 00:22:41.375 read: IOPS=451, BW=113MiB/s (118MB/s)(1146MiB/10159msec) 00:22:41.375 slat (usec): min=17, max=101388, avg=2143.50, stdev=7254.93 00:22:41.375 clat (msec): min=44, max=334, avg=139.16, stdev=47.04 00:22:41.375 lat (msec): min=44, max=351, avg=141.30, stdev=47.88 00:22:41.375 clat percentiles (msec): 00:22:41.375 | 1.00th=[ 73], 5.00th=[ 86], 10.00th=[ 92], 20.00th=[ 100], 00:22:41.375 | 30.00th=[ 106], 40.00th=[ 113], 50.00th=[ 126], 60.00th=[ 150], 00:22:41.375 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 203], 95.00th=[ 228], 00:22:41.375 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334], 00:22:41.375 | 99.99th=[ 334] 00:22:41.375 bw ( KiB/s): min=64512, max=173568, per=9.68%, avg=115735.20, stdev=33221.21, samples=20 00:22:41.375 iops : min= 252, max= 678, avg=452.05, stdev=129.79, samples=20 00:22:41.375 lat (msec) : 50=0.15%, 100=21.96%, 250=75.20%, 500=2.68% 00:22:41.376 cpu : usr=0.17%, sys=1.79%, ctx=692, majf=0, minf=4097 00:22:41.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:41.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.376 issued rwts: total=4585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.376 00:22:41.376 Run status group 0 (all jobs): 00:22:41.376 READ: bw=1168MiB/s (1225MB/s), 88.3MiB/s-132MiB/s (92.6MB/s-138MB/s), io=11.6GiB (12.4GB), run=10049-10166msec 00:22:41.376 00:22:41.376 Disk stats (read/write): 00:22:41.376 nvme0n1: ios=7285/0, merge=0/0, ticks=1230891/0, in_queue=1230891, util=96.67% 00:22:41.376 nvme10n1: ios=7835/0, merge=0/0, ticks=1235032/0, in_queue=1235032, util=97.07% 00:22:41.376 nvme1n1: ios=7952/0, merge=0/0, ticks=1227914/0, in_queue=1227914, util=97.04% 00:22:41.376 nvme2n1: ios=9132/0, merge=0/0, ticks=1226326/0, in_queue=1226326, util=97.25% 00:22:41.376 nvme3n1: ios=7786/0, merge=0/0, ticks=1226915/0, in_queue=1226915, util=97.09% 00:22:41.376 nvme4n1: ios=10017/0, merge=0/0, ticks=1238027/0, in_queue=1238027, util=97.68% 00:22:41.376 nvme5n1: ios=10562/0, merge=0/0, ticks=1228346/0, in_queue=1228346, util=97.78% 00:22:41.376 nvme6n1: ios=7953/0, merge=0/0, ticks=1235873/0, in_queue=1235873, util=97.31% 00:22:41.376 nvme7n1: ios=8846/0, merge=0/0, ticks=1227222/0, in_queue=1227222, util=97.99% 00:22:41.376 nvme8n1: ios=7025/0, merge=0/0, ticks=1231771/0, in_queue=1231771, util=98.00% 00:22:41.376 nvme9n1: ios=9031/0, merge=0/0, 
ticks=1224341/0, in_queue=1224341, util=97.91% 00:22:41.376 14:36:46 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:41.376 [global] 00:22:41.376 thread=1 00:22:41.376 invalidate=1 00:22:41.376 rw=randwrite 00:22:41.376 time_based=1 00:22:41.376 runtime=10 00:22:41.376 ioengine=libaio 00:22:41.376 direct=1 00:22:41.376 bs=262144 00:22:41.376 iodepth=64 00:22:41.376 norandommap=1 00:22:41.376 numjobs=1 00:22:41.376 00:22:41.376 [job0] 00:22:41.376 filename=/dev/nvme0n1 00:22:41.376 [job1] 00:22:41.376 filename=/dev/nvme10n1 00:22:41.376 [job2] 00:22:41.376 filename=/dev/nvme1n1 00:22:41.376 [job3] 00:22:41.376 filename=/dev/nvme2n1 00:22:41.376 [job4] 00:22:41.376 filename=/dev/nvme3n1 00:22:41.376 [job5] 00:22:41.376 filename=/dev/nvme4n1 00:22:41.376 [job6] 00:22:41.376 filename=/dev/nvme5n1 00:22:41.376 [job7] 00:22:41.376 filename=/dev/nvme6n1 00:22:41.376 [job8] 00:22:41.376 filename=/dev/nvme7n1 00:22:41.376 [job9] 00:22:41.376 filename=/dev/nvme8n1 00:22:41.376 [job10] 00:22:41.376 filename=/dev/nvme9n1 00:22:41.376 Could not set queue depth (nvme0n1) 00:22:41.376 Could not set queue depth (nvme10n1) 00:22:41.376 Could not set queue depth (nvme1n1) 00:22:41.376 Could not set queue depth (nvme2n1) 00:22:41.376 Could not set queue depth (nvme3n1) 00:22:41.376 Could not set queue depth (nvme4n1) 00:22:41.376 Could not set queue depth (nvme5n1) 00:22:41.376 Could not set queue depth (nvme6n1) 00:22:41.376 Could not set queue depth (nvme7n1) 00:22:41.376 Could not set queue depth (nvme8n1) 00:22:41.376 Could not set queue depth (nvme9n1) 00:22:41.376 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.376 fio-3.35 00:22:41.376 Starting 11 threads 00:22:51.356 00:22:51.356 job0: (groupid=0, jobs=1): err= 0: pid=81140: Fri Dec 6 14:36:57 2024 00:22:51.356 write: IOPS=477, BW=119MiB/s (125MB/s)(1206MiB/10107msec); 0 zone resets 00:22:51.356 slat (usec): min=19, max=16869, avg=2024.33, stdev=3545.40 00:22:51.356 clat (msec): min=15, max=235, avg=132.08, stdev=19.44 00:22:51.356 lat (msec): min=15, max=237, avg=134.10, stdev=19.41 00:22:51.356 clat percentiles (msec): 00:22:51.356 
| 1.00th=[ 101], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 114], 00:22:51.356 | 30.00th=[ 120], 40.00th=[ 131], 50.00th=[ 134], 60.00th=[ 138], 00:22:51.356 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 159], 00:22:51.356 | 99.00th=[ 207], 99.50th=[ 213], 99.90th=[ 228], 99.95th=[ 232], 00:22:51.356 | 99.99th=[ 236] 00:22:51.356 bw ( KiB/s): min=93184, max=145920, per=11.34%, avg=121804.10, stdev=14426.62, samples=20 00:22:51.356 iops : min= 364, max= 570, avg=475.75, stdev=56.29, samples=20 00:22:51.356 lat (msec) : 20=0.06%, 50=0.23%, 100=0.73%, 250=98.98% 00:22:51.356 cpu : usr=0.83%, sys=1.28%, ctx=6441, majf=0, minf=1 00:22:51.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:51.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.356 issued rwts: total=0,4822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.356 job1: (groupid=0, jobs=1): err= 0: pid=81141: Fri Dec 6 14:36:57 2024 00:22:51.356 write: IOPS=519, BW=130MiB/s (136MB/s)(1311MiB/10097msec); 0 zone resets 00:22:51.356 slat (usec): min=20, max=63945, avg=1901.32, stdev=3367.36 00:22:51.356 clat (msec): min=21, max=273, avg=121.28, stdev=21.86 00:22:51.356 lat (msec): min=21, max=273, avg=123.18, stdev=21.90 00:22:51.356 clat percentiles (msec): 00:22:51.356 | 1.00th=[ 99], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 107], 00:22:51.356 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:22:51.356 | 70.00th=[ 123], 80.00th=[ 136], 90.00th=[ 148], 95.00th=[ 157], 00:22:51.356 | 99.00th=[ 213], 99.50th=[ 253], 99.90th=[ 266], 99.95th=[ 266], 00:22:51.356 | 99.99th=[ 275] 00:22:51.356 bw ( KiB/s): min=84992, max=155648, per=12.35%, avg=132608.00, stdev=18913.75, samples=20 00:22:51.356 iops : min= 332, max= 608, avg=518.00, stdev=73.88, samples=20 00:22:51.356 lat (msec) : 50=0.19%, 100=2.98%, 250=96.24%, 500=0.59% 00:22:51.356 cpu : usr=1.37%, sys=1.49%, ctx=6776, majf=0, minf=1 00:22:51.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:51.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.356 issued rwts: total=0,5243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.356 job2: (groupid=0, jobs=1): err= 0: pid=81153: Fri Dec 6 14:36:57 2024 00:22:51.356 write: IOPS=375, BW=93.9MiB/s (98.4MB/s)(950MiB/10117msec); 0 zone resets 00:22:51.356 slat (usec): min=16, max=42671, avg=2488.85, stdev=5173.73 00:22:51.356 clat (msec): min=5, max=370, avg=167.93, stdev=82.15 00:22:51.356 lat (msec): min=6, max=370, avg=170.42, stdev=83.32 00:22:51.356 clat percentiles (msec): 00:22:51.356 | 1.00th=[ 26], 5.00th=[ 106], 10.00th=[ 108], 20.00th=[ 113], 00:22:51.356 | 30.00th=[ 115], 40.00th=[ 125], 50.00th=[ 134], 60.00th=[ 140], 00:22:51.356 | 70.00th=[ 155], 80.00th=[ 284], 90.00th=[ 309], 95.00th=[ 330], 00:22:51.356 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 372], 00:22:51.356 | 99.99th=[ 372] 00:22:51.356 bw ( KiB/s): min=47616, max=145920, per=8.90%, avg=95601.65, stdev=38590.05, samples=20 00:22:51.356 iops : min= 186, max= 570, avg=373.40, stdev=150.69, samples=20 00:22:51.356 lat (msec) : 10=0.16%, 20=0.45%, 50=1.74%, 100=1.92%, 250=71.72% 00:22:51.356 lat (msec) : 
500=24.01% 00:22:51.356 cpu : usr=0.99%, sys=1.13%, ctx=4796, majf=0, minf=1 00:22:51.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:22:51.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.356 issued rwts: total=0,3798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.356 job3: (groupid=0, jobs=1): err= 0: pid=81154: Fri Dec 6 14:36:57 2024 00:22:51.356 write: IOPS=296, BW=74.2MiB/s (77.8MB/s)(754MiB/10165msec); 0 zone resets 00:22:51.356 slat (usec): min=16, max=28307, avg=3237.70, stdev=6159.98 00:22:51.356 clat (msec): min=5, max=335, avg=212.41, stdev=72.39 00:22:51.356 lat (msec): min=5, max=335, avg=215.65, stdev=73.30 00:22:51.356 clat percentiles (msec): 00:22:51.356 | 1.00th=[ 37], 5.00th=[ 55], 10.00th=[ 67], 20.00th=[ 174], 00:22:51.356 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 218], 60.00th=[ 230], 00:22:51.356 | 70.00th=[ 266], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 309], 00:22:51.356 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 338], 00:22:51.356 | 99.99th=[ 338] 00:22:51.356 bw ( KiB/s): min=51200, max=189952, per=7.04%, avg=75550.55, stdev=30635.37, samples=20 00:22:51.356 iops : min= 200, max= 742, avg=295.10, stdev=119.66, samples=20 00:22:51.356 lat (msec) : 10=0.07%, 20=0.27%, 50=1.23%, 100=8.92%, 250=57.18% 00:22:51.356 lat (msec) : 500=32.34% 00:22:51.356 cpu : usr=1.13%, sys=0.93%, ctx=1439, majf=0, minf=1 00:22:51.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:22:51.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.356 issued rwts: total=0,3015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.356 job4: (groupid=0, jobs=1): err= 0: pid=81155: Fri Dec 6 14:36:57 2024 00:22:51.356 write: IOPS=271, BW=67.8MiB/s (71.1MB/s)(689MiB/10157msec); 0 zone resets 00:22:51.356 slat (usec): min=26, max=57294, avg=3512.36, stdev=6531.01 00:22:51.356 clat (msec): min=11, max=346, avg=232.29, stdev=55.60 00:22:51.356 lat (msec): min=11, max=348, avg=235.81, stdev=56.08 00:22:51.356 clat percentiles (msec): 00:22:51.356 | 1.00th=[ 101], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 184], 00:22:51.356 | 30.00th=[ 190], 40.00th=[ 211], 50.00th=[ 226], 60.00th=[ 239], 00:22:51.356 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 309], 95.00th=[ 326], 00:22:51.356 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 347], 00:22:51.356 | 99.99th=[ 347] 00:22:51.356 bw ( KiB/s): min=51200, max=90112, per=6.42%, avg=68909.25, stdev=14378.04, samples=20 00:22:51.356 iops : min= 200, max= 352, avg=269.15, stdev=56.18, samples=20 00:22:51.356 lat (msec) : 20=0.22%, 50=0.76%, 250=62.03%, 500=36.99% 00:22:51.356 cpu : usr=1.07%, sys=0.79%, ctx=3037, majf=0, minf=1 00:22:51.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:51.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.356 issued rwts: total=0,2755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.356 job5: (groupid=0, jobs=1): err= 0: pid=81156: Fri Dec 6 14:36:57 2024 
00:22:51.356 write: IOPS=266, BW=66.5MiB/s (69.8MB/s)(676MiB/10158msec); 0 zone resets 00:22:51.356 slat (usec): min=24, max=42374, avg=3616.80, stdev=6772.68 00:22:51.356 clat (msec): min=17, max=353, avg=236.66, stdev=58.84 00:22:51.356 lat (msec): min=18, max=353, avg=240.28, stdev=59.49 00:22:51.356 clat percentiles (msec): 00:22:51.356 | 1.00th=[ 97], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 186], 00:22:51.356 | 30.00th=[ 190], 40.00th=[ 211], 50.00th=[ 230], 60.00th=[ 247], 00:22:51.356 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 321], 95.00th=[ 330], 00:22:51.356 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:22:51.356 | 99.99th=[ 355] 00:22:51.356 bw ( KiB/s): min=45056, max=90112, per=6.30%, avg=67606.10, stdev=15250.87, samples=20 00:22:51.356 iops : min= 176, max= 352, avg=264.05, stdev=59.57, samples=20 00:22:51.356 lat (msec) : 20=0.07%, 50=0.41%, 100=0.59%, 250=61.58%, 500=37.35% 00:22:51.356 cpu : usr=0.89%, sys=0.94%, ctx=2134, majf=0, minf=1 00:22:51.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:51.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.356 issued rwts: total=0,2704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.356 job6: (groupid=0, jobs=1): err= 0: pid=81157: Fri Dec 6 14:36:57 2024 00:22:51.356 write: IOPS=520, BW=130MiB/s (136MB/s)(1313MiB/10093msec); 0 zone resets 00:22:51.356 slat (usec): min=17, max=47871, avg=1900.91, stdev=3314.73 00:22:51.356 clat (msec): min=35, max=264, avg=121.09, stdev=19.60 00:22:51.356 lat (msec): min=35, max=264, avg=122.99, stdev=19.63 00:22:51.356 clat percentiles (msec): 00:22:51.356 | 1.00th=[ 100], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 108], 00:22:51.356 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:22:51.356 | 70.00th=[ 124], 80.00th=[ 136], 90.00th=[ 148], 95.00th=[ 157], 00:22:51.356 | 99.00th=[ 199], 99.50th=[ 207], 99.90th=[ 251], 99.95th=[ 251], 00:22:51.356 | 99.99th=[ 266] 00:22:51.356 bw ( KiB/s): min=88576, max=156160, per=12.37%, avg=132775.90, stdev=18543.42, samples=20 00:22:51.357 iops : min= 346, max= 610, avg=518.65, stdev=72.44, samples=20 00:22:51.357 lat (msec) : 50=0.10%, 100=3.01%, 250=96.78%, 500=0.11% 00:22:51.357 cpu : usr=1.78%, sys=1.41%, ctx=6923, majf=0, minf=1 00:22:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.357 issued rwts: total=0,5250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.357 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.357 job7: (groupid=0, jobs=1): err= 0: pid=81158: Fri Dec 6 14:36:57 2024 00:22:51.357 write: IOPS=263, BW=65.9MiB/s (69.1MB/s)(669MiB/10151msec); 0 zone resets 00:22:51.357 slat (usec): min=18, max=69171, avg=3731.58, stdev=7112.90 00:22:51.357 clat (msec): min=39, max=375, avg=238.93, stdev=64.49 00:22:51.357 lat (msec): min=39, max=375, avg=242.66, stdev=65.13 00:22:51.357 clat percentiles (msec): 00:22:51.357 | 1.00th=[ 63], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:22:51.357 | 30.00th=[ 194], 40.00th=[ 211], 50.00th=[ 230], 60.00th=[ 241], 00:22:51.357 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 334], 95.00th=[ 359], 00:22:51.357 | 99.00th=[ 372], 
99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:22:51.357 | 99.99th=[ 376] 00:22:51.357 bw ( KiB/s): min=45056, max=92160, per=6.23%, avg=66892.80, stdev=15602.65, samples=20 00:22:51.357 iops : min= 176, max= 360, avg=261.30, stdev=60.95, samples=20 00:22:51.357 lat (msec) : 50=0.45%, 100=1.79%, 250=61.70%, 500=36.06% 00:22:51.357 cpu : usr=0.83%, sys=0.98%, ctx=1178, majf=0, minf=1 00:22:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:22:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.357 issued rwts: total=0,2676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.357 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.357 job8: (groupid=0, jobs=1): err= 0: pid=81159: Fri Dec 6 14:36:57 2024 00:22:51.357 write: IOPS=271, BW=67.8MiB/s (71.1MB/s)(689MiB/10152msec); 0 zone resets 00:22:51.357 slat (usec): min=22, max=35981, avg=3564.37, stdev=6572.47 00:22:51.357 clat (msec): min=42, max=363, avg=232.12, stdev=56.15 00:22:51.357 lat (msec): min=42, max=363, avg=235.68, stdev=56.73 00:22:51.357 clat percentiles (msec): 00:22:51.357 | 1.00th=[ 107], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 184], 00:22:51.357 | 30.00th=[ 188], 40.00th=[ 203], 50.00th=[ 220], 60.00th=[ 228], 00:22:51.357 | 70.00th=[ 279], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 330], 00:22:51.357 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:22:51.357 | 99.99th=[ 363] 00:22:51.357 bw ( KiB/s): min=47104, max=92160, per=6.42%, avg=68921.15, stdev=15247.40, samples=20 00:22:51.357 iops : min= 184, max= 360, avg=269.20, stdev=59.58, samples=20 00:22:51.357 lat (msec) : 50=0.15%, 100=0.73%, 250=63.52%, 500=35.61% 00:22:51.357 cpu : usr=1.12%, sys=0.71%, ctx=3261, majf=0, minf=1 00:22:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.357 issued rwts: total=0,2755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.357 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.357 job9: (groupid=0, jobs=1): err= 0: pid=81160: Fri Dec 6 14:36:57 2024 00:22:51.357 write: IOPS=473, BW=118MiB/s (124MB/s)(1195MiB/10106msec); 0 zone resets 00:22:51.357 slat (usec): min=19, max=19569, avg=2086.92, stdev=3619.00 00:22:51.357 clat (msec): min=18, max=230, avg=133.14, stdev=20.74 00:22:51.357 lat (msec): min=18, max=230, avg=135.22, stdev=20.75 00:22:51.357 clat percentiles (msec): 00:22:51.357 | 1.00th=[ 106], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 115], 00:22:51.357 | 30.00th=[ 120], 40.00th=[ 131], 50.00th=[ 136], 60.00th=[ 138], 00:22:51.357 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 165], 00:22:51.357 | 99.00th=[ 218], 99.50th=[ 224], 99.90th=[ 228], 99.95th=[ 230], 00:22:51.357 | 99.99th=[ 230] 00:22:51.357 bw ( KiB/s): min=71536, max=145408, per=11.25%, avg=120747.80, stdev=17173.19, samples=20 00:22:51.357 iops : min= 279, max= 568, avg=471.60, stdev=67.09, samples=20 00:22:51.357 lat (msec) : 20=0.08%, 50=0.08%, 100=0.33%, 250=99.50% 00:22:51.357 cpu : usr=0.89%, sys=1.18%, ctx=6846, majf=0, minf=1 00:22:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.357 issued rwts: total=0,4781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.357 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.357 job10: (groupid=0, jobs=1): err= 0: pid=81161: Fri Dec 6 14:36:57 2024 00:22:51.357 write: IOPS=478, BW=120MiB/s (126MB/s)(1209MiB/10099msec); 0 zone resets 00:22:51.357 slat (usec): min=20, max=29875, avg=2029.07, stdev=3555.46 00:22:51.357 clat (msec): min=10, max=252, avg=131.53, stdev=22.77 00:22:51.357 lat (msec): min=13, max=252, avg=133.56, stdev=22.87 00:22:51.357 clat percentiles (msec): 00:22:51.357 | 1.00th=[ 67], 5.00th=[ 107], 10.00th=[ 112], 20.00th=[ 114], 00:22:51.357 | 30.00th=[ 118], 40.00th=[ 129], 50.00th=[ 134], 60.00th=[ 138], 00:22:51.357 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 153], 00:22:51.357 | 99.00th=[ 234], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 253], 00:22:51.357 | 99.99th=[ 253] 00:22:51.357 bw ( KiB/s): min=69771, max=145408, per=11.38%, avg=122221.35, stdev=17237.76, samples=20 00:22:51.357 iops : min= 272, max= 568, avg=477.40, stdev=67.42, samples=20 00:22:51.357 lat (msec) : 20=0.08%, 50=0.56%, 100=1.22%, 250=97.95%, 500=0.19% 00:22:51.357 cpu : usr=1.84%, sys=1.27%, ctx=5897, majf=0, minf=1 00:22:51.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:51.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.357 issued rwts: total=0,4837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.357 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.357 00:22:51.357 Run status group 0 (all jobs): 00:22:51.357 WRITE: bw=1049MiB/s (1100MB/s), 65.9MiB/s-130MiB/s (69.1MB/s-136MB/s), io=10.4GiB (11.2GB), run=10093-10165msec 00:22:51.357 00:22:51.357 Disk stats (read/write): 00:22:51.357 nvme0n1: ios=50/9477, merge=0/0, ticks=36/1212333, in_queue=1212369, util=97.63% 00:22:51.357 nvme10n1: ios=49/10307, merge=0/0, ticks=58/1206923, in_queue=1206981, util=97.64% 00:22:51.357 nvme1n1: ios=40/7449, merge=0/0, ticks=36/1213525, in_queue=1213561, util=98.04% 00:22:51.357 nvme2n1: ios=15/5868, merge=0/0, ticks=28/1207084, in_queue=1207112, util=97.77% 00:22:51.357 nvme3n1: ios=0/5338, merge=0/0, ticks=0/1204729, in_queue=1204729, util=97.66% 00:22:51.357 nvme4n1: ios=0/5247, merge=0/0, ticks=0/1205492, in_queue=1205492, util=98.05% 00:22:51.357 nvme5n1: ios=0/10308, merge=0/0, ticks=0/1206116, in_queue=1206116, util=98.03% 00:22:51.357 nvme6n1: ios=0/5178, merge=0/0, ticks=0/1202622, in_queue=1202622, util=98.03% 00:22:51.357 nvme7n1: ios=0/5344, merge=0/0, ticks=0/1204886, in_queue=1204886, util=98.41% 00:22:51.357 nvme8n1: ios=0/9394, merge=0/0, ticks=0/1211242, in_queue=1211242, util=98.75% 00:22:51.357 nvme9n1: ios=0/9480, merge=0/0, ticks=0/1208850, in_queue=1208850, util=98.65% 00:22:51.357 14:36:57 -- target/multiconnection.sh@36 -- # sync 00:22:51.357 14:36:57 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:51.357 14:36:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.357 14:36:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:51.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:51.357 14:36:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:51.357 14:36:57 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.357 14:36:57 -- common/autotest_common.sh@1209 -- # grep -q -w 
SPDK1 00:22:51.357 14:36:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.357 14:36:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.357 14:36:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:51.357 14:36:57 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.357 14:36:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.357 14:36:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.357 14:36:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.357 14:36:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.357 14:36:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.357 14:36:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:51.357 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:51.357 14:36:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:51.357 14:36:57 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.357 14:36:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.357 14:36:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:22:51.357 14:36:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.357 14:36:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:51.357 14:36:57 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.357 14:36:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:51.357 14:36:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.357 14:36:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.357 14:36:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.357 14:36:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.357 14:36:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:51.357 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:51.357 14:36:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:51.357 14:36:57 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.357 14:36:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.357 14:36:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:22:51.357 14:36:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.357 14:36:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:51.357 14:36:57 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.357 14:36:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:51.357 14:36:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.357 14:36:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.357 14:36:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.357 14:36:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.357 14:36:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:51.358 14:36:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:51.358 14:36:57 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 
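The same disconnect / wait / delete sequence continues below for the remaining subsystems (cnode4 through cnode11). Condensed, the loop that multiconnection.sh drives looks roughly like the sketch that follows; scripts/rpc.py is shown in place of the rpc_cmd wrapper used by the trace, and the poll interval is an assumption:

  # Per-subsystem teardown: disconnect the initiator, wait until the SPDKn
  # serial no longer shows up in lsblk, then delete the subsystem over RPC.
  for i in $(seq 1 11); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      # waitforserial_disconnect: poll until no block device with serial SPDK$i remains
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
          sleep 1
      done
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done

Deleting the subsystem only after the serial has disappeared from lsblk avoids racing the kernel's removal of the namespace block device.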
00:22:51.358 14:36:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.358 14:36:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:51.358 14:36:57 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.358 14:36:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:51.358 14:36:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.358 14:36:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.358 14:36:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.358 14:36:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.358 14:36:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:51.358 14:36:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:51.358 14:36:57 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:22:51.358 14:36:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:51.358 14:36:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.358 14:36:57 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.358 14:36:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:51.358 14:36:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.358 14:36:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.358 14:36:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.358 14:36:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.358 14:36:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:51.358 14:36:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:51.358 14:36:57 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:22:51.358 14:36:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:51.358 14:36:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.358 14:36:57 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.358 14:36:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:51.358 14:36:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.358 14:36:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.358 14:36:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.358 14:36:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.358 14:36:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:51.358 14:36:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:51.358 14:36:57 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.358 14:36:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:51.358 14:36:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.358 
14:36:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:51.358 14:36:57 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.358 14:36:57 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:51.358 14:36:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.358 14:36:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.358 14:36:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.358 14:36:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.358 14:36:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:51.358 14:36:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:51.358 14:36:58 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:51.358 14:36:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.358 14:36:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:51.358 14:36:58 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.358 14:36:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:51.358 14:36:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.358 14:36:58 -- common/autotest_common.sh@10 -- # set +x 00:22:51.358 14:36:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.358 14:36:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.358 14:36:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:51.358 14:36:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:51.358 14:36:58 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:51.358 14:36:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:51.358 14:36:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.358 14:36:58 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.358 14:36:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:51.358 14:36:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.358 14:36:58 -- common/autotest_common.sh@10 -- # set +x 00:22:51.358 14:36:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.358 14:36:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.358 14:36:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:51.358 14:36:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:51.358 14:36:58 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:51.358 14:36:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.358 14:36:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:51.358 14:36:58 -- 
common/autotest_common.sh@1220 -- # return 0 00:22:51.358 14:36:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:51.358 14:36:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.358 14:36:58 -- common/autotest_common.sh@10 -- # set +x 00:22:51.358 14:36:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.358 14:36:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.358 14:36:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:51.358 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:51.358 14:36:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:51.358 14:36:58 -- common/autotest_common.sh@1208 -- # local i=0 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:51.358 14:36:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:51.617 14:36:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:51.617 14:36:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:51.617 14:36:58 -- common/autotest_common.sh@1220 -- # return 0 00:22:51.617 14:36:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:51.617 14:36:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.617 14:36:58 -- common/autotest_common.sh@10 -- # set +x 00:22:51.617 14:36:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.617 14:36:58 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:51.617 14:36:58 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:51.617 14:36:58 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:51.617 14:36:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:51.617 14:36:58 -- nvmf/common.sh@116 -- # sync 00:22:51.617 14:36:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:51.617 14:36:58 -- nvmf/common.sh@119 -- # set +e 00:22:51.617 14:36:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:51.617 14:36:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:51.617 rmmod nvme_tcp 00:22:51.617 rmmod nvme_fabrics 00:22:51.617 rmmod nvme_keyring 00:22:51.617 14:36:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:51.617 14:36:58 -- nvmf/common.sh@123 -- # set -e 00:22:51.617 14:36:58 -- nvmf/common.sh@124 -- # return 0 00:22:51.617 14:36:58 -- nvmf/common.sh@477 -- # '[' -n 80458 ']' 00:22:51.617 14:36:58 -- nvmf/common.sh@478 -- # killprocess 80458 00:22:51.617 14:36:58 -- common/autotest_common.sh@936 -- # '[' -z 80458 ']' 00:22:51.617 14:36:58 -- common/autotest_common.sh@940 -- # kill -0 80458 00:22:51.617 14:36:58 -- common/autotest_common.sh@941 -- # uname 00:22:51.617 14:36:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.617 14:36:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80458 00:22:51.617 killing process with pid 80458 00:22:51.617 14:36:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.617 14:36:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.617 14:36:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80458' 00:22:51.617 14:36:58 -- common/autotest_common.sh@955 -- # kill 80458 00:22:51.617 14:36:58 -- common/autotest_common.sh@960 -- # wait 80458 00:22:52.185 14:36:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:52.185 14:36:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p 
]] 00:22:52.185 14:36:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:52.185 14:36:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.185 14:36:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:52.185 14:36:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.185 14:36:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.185 14:36:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.185 14:36:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:52.185 00:22:52.185 real 0m50.136s 00:22:52.185 user 2m52.756s 00:22:52.185 sys 0m21.340s 00:22:52.185 14:36:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:52.185 14:36:59 -- common/autotest_common.sh@10 -- # set +x 00:22:52.185 ************************************ 00:22:52.185 END TEST nvmf_multiconnection 00:22:52.185 ************************************ 00:22:52.185 14:36:59 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:52.185 14:36:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:52.185 14:36:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:52.185 14:36:59 -- common/autotest_common.sh@10 -- # set +x 00:22:52.185 ************************************ 00:22:52.185 START TEST nvmf_initiator_timeout 00:22:52.185 ************************************ 00:22:52.185 14:36:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:52.444 * Looking for test storage... 00:22:52.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:52.444 14:36:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:52.444 14:36:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:52.444 14:36:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:52.444 14:36:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:52.444 14:36:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:52.444 14:36:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:52.444 14:36:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:52.444 14:36:59 -- scripts/common.sh@335 -- # IFS=.-: 00:22:52.444 14:36:59 -- scripts/common.sh@335 -- # read -ra ver1 00:22:52.444 14:36:59 -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.444 14:36:59 -- scripts/common.sh@336 -- # read -ra ver2 00:22:52.444 14:36:59 -- scripts/common.sh@337 -- # local 'op=<' 00:22:52.444 14:36:59 -- scripts/common.sh@339 -- # ver1_l=2 00:22:52.444 14:36:59 -- scripts/common.sh@340 -- # ver2_l=1 00:22:52.444 14:36:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:52.444 14:36:59 -- scripts/common.sh@343 -- # case "$op" in 00:22:52.444 14:36:59 -- scripts/common.sh@344 -- # : 1 00:22:52.444 14:36:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:52.444 14:36:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.444 14:36:59 -- scripts/common.sh@364 -- # decimal 1 00:22:52.444 14:36:59 -- scripts/common.sh@352 -- # local d=1 00:22:52.444 14:36:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.444 14:36:59 -- scripts/common.sh@354 -- # echo 1 00:22:52.444 14:36:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:52.444 14:36:59 -- scripts/common.sh@365 -- # decimal 2 00:22:52.444 14:36:59 -- scripts/common.sh@352 -- # local d=2 00:22:52.444 14:36:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.444 14:36:59 -- scripts/common.sh@354 -- # echo 2 00:22:52.444 14:36:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:52.444 14:36:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:52.444 14:36:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:52.444 14:36:59 -- scripts/common.sh@367 -- # return 0 00:22:52.444 14:36:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.444 14:36:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.444 --rc genhtml_branch_coverage=1 00:22:52.444 --rc genhtml_function_coverage=1 00:22:52.444 --rc genhtml_legend=1 00:22:52.444 --rc geninfo_all_blocks=1 00:22:52.444 --rc geninfo_unexecuted_blocks=1 00:22:52.444 00:22:52.444 ' 00:22:52.444 14:36:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.444 --rc genhtml_branch_coverage=1 00:22:52.444 --rc genhtml_function_coverage=1 00:22:52.444 --rc genhtml_legend=1 00:22:52.444 --rc geninfo_all_blocks=1 00:22:52.444 --rc geninfo_unexecuted_blocks=1 00:22:52.444 00:22:52.444 ' 00:22:52.444 14:36:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.444 --rc genhtml_branch_coverage=1 00:22:52.444 --rc genhtml_function_coverage=1 00:22:52.444 --rc genhtml_legend=1 00:22:52.444 --rc geninfo_all_blocks=1 00:22:52.444 --rc geninfo_unexecuted_blocks=1 00:22:52.444 00:22:52.444 ' 00:22:52.444 14:36:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:52.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.444 --rc genhtml_branch_coverage=1 00:22:52.444 --rc genhtml_function_coverage=1 00:22:52.444 --rc genhtml_legend=1 00:22:52.444 --rc geninfo_all_blocks=1 00:22:52.444 --rc geninfo_unexecuted_blocks=1 00:22:52.444 00:22:52.444 ' 00:22:52.444 14:36:59 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.444 14:36:59 -- nvmf/common.sh@7 -- # uname -s 00:22:52.444 14:36:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.444 14:36:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.444 14:36:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.444 14:36:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.444 14:36:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.444 14:36:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.444 14:36:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.444 14:36:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.444 14:36:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.444 14:36:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.444 14:36:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
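The scripts/common.sh trace above is just a field-by-field version comparison (lt 1.15 2) used to decide which lcov option set to export. A hypothetical condensed form of that helper, not the literal source:

  # Compare two dotted version strings field by field, numerically.
  lt() { cmp_versions "$1" "<" "$2"; }
  cmp_versions() {
      local op=$2 v a b
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a == b)) && continue
          case $op in
              '<') ((a < b)); return ;;
              '>') ((a > b)); return ;;
          esac
      done
      # all fields equal: strict "<" or ">" is false, "<=", ">=", "==" are true
      [[ $op == '<=' || $op == '>=' || $op == '==' ]]
  }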
00:22:52.444 14:36:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:22:52.444 14:36:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.444 14:36:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.444 14:36:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.444 14:36:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.444 14:36:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.444 14:36:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.444 14:36:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.444 14:36:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.444 14:36:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.444 14:36:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.444 14:36:59 -- paths/export.sh@5 -- # export PATH 00:22:52.444 14:36:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.444 14:36:59 -- nvmf/common.sh@46 -- # : 0 00:22:52.444 14:36:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:52.444 14:36:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:52.444 14:36:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:52.444 14:36:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.445 14:36:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.445 14:36:59 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:22:52.445 14:36:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:52.445 14:36:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:52.445 14:36:59 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.445 14:36:59 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.445 14:36:59 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:52.445 14:36:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:52.445 14:36:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.445 14:36:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:52.445 14:36:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:52.445 14:36:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:52.445 14:36:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.445 14:36:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.445 14:36:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.445 14:36:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:52.445 14:36:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:52.445 14:36:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:52.445 14:36:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:52.445 14:36:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:52.445 14:36:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:52.445 14:36:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.445 14:36:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.445 14:36:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:52.445 14:36:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:52.445 14:36:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.445 14:36:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.445 14:36:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.445 14:36:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.445 14:36:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.445 14:36:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.445 14:36:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.445 14:36:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.445 14:36:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:52.445 14:36:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:52.445 Cannot find device "nvmf_tgt_br" 00:22:52.445 14:36:59 -- nvmf/common.sh@154 -- # true 00:22:52.445 14:36:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.445 Cannot find device "nvmf_tgt_br2" 00:22:52.445 14:36:59 -- nvmf/common.sh@155 -- # true 00:22:52.445 14:36:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:52.445 14:36:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:52.445 Cannot find device "nvmf_tgt_br" 00:22:52.445 14:36:59 -- nvmf/common.sh@157 -- # true 00:22:52.445 14:36:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:52.445 Cannot find device "nvmf_tgt_br2" 00:22:52.445 14:36:59 -- nvmf/common.sh@158 -- # true 00:22:52.445 14:36:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:52.728 14:36:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:52.728 14:36:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:22:52.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.728 14:36:59 -- nvmf/common.sh@161 -- # true 00:22:52.728 14:36:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.728 14:36:59 -- nvmf/common.sh@162 -- # true 00:22:52.728 14:36:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.728 14:36:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.728 14:36:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.728 14:36:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.728 14:36:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:52.728 14:36:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.728 14:36:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.728 14:36:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:52.728 14:36:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:52.728 14:36:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:52.728 14:36:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:52.728 14:36:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:52.728 14:36:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:52.728 14:36:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.728 14:36:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.728 14:36:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.728 14:36:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:52.728 14:36:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:52.728 14:36:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.728 14:36:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.728 14:36:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.728 14:36:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.728 14:36:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.728 14:36:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:52.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:22:52.728 00:22:52.728 --- 10.0.0.2 ping statistics --- 00:22:52.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.728 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:52.728 14:36:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:52.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:52.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:52.729 00:22:52.729 --- 10.0.0.3 ping statistics --- 00:22:52.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.729 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:52.729 14:36:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:22:52.729 00:22:52.729 --- 10.0.0.1 ping statistics --- 00:22:52.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.729 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:52.729 14:36:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.729 14:36:59 -- nvmf/common.sh@421 -- # return 0 00:22:52.729 14:36:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:52.729 14:36:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.729 14:36:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:52.729 14:36:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:52.729 14:36:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.729 14:36:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:52.729 14:36:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:52.729 14:36:59 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:52.729 14:36:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:52.729 14:36:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.729 14:36:59 -- common/autotest_common.sh@10 -- # set +x 00:22:52.729 14:36:59 -- nvmf/common.sh@469 -- # nvmfpid=81535 00:22:52.729 14:36:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:52.729 14:36:59 -- nvmf/common.sh@470 -- # waitforlisten 81535 00:22:52.729 14:36:59 -- common/autotest_common.sh@829 -- # '[' -z 81535 ']' 00:22:52.729 14:36:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.729 14:36:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.729 14:36:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.729 14:36:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.729 14:36:59 -- common/autotest_common.sh@10 -- # set +x 00:22:52.987 [2024-12-06 14:36:59.734318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:52.987 [2024-12-06 14:36:59.734394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.987 [2024-12-06 14:36:59.874233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.247 [2024-12-06 14:37:00.013962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:53.247 [2024-12-06 14:37:00.014455] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.247 [2024-12-06 14:37:00.014500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.247 [2024-12-06 14:37:00.014515] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
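Stripped of the xtrace noise, the nvmf_veth_init sequence above builds the following test topology; every command below is taken from the trace, only the intermediate "ip link set ... up" calls are omitted. The initiator interface stays in the root namespace at 10.0.0.1, the two target interfaces move into nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, and their host-side peers are bridged so the ping checks can pass:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target, first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # target, second address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br      # bridge the host-side veth peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                         # sanity checks, as in the trace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt is started inside the namespace while the initiator-side nvme and fio commands run from the root namespace, which is why the target launch above is prefixed with "ip netns exec nvmf_tgt_ns_spdk".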
00:22:53.247 [2024-12-06 14:37:00.014612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.247 [2024-12-06 14:37:00.015125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.247 [2024-12-06 14:37:00.015269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.247 [2024-12-06 14:37:00.015279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.814 14:37:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.814 14:37:00 -- common/autotest_common.sh@862 -- # return 0 00:22:53.814 14:37:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:53.814 14:37:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:53.814 14:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 14:37:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:54.072 14:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.072 14:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 Malloc0 00:22:54.072 14:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:54.072 14:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.072 14:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 Delay0 00:22:54.072 14:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.072 14:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.072 14:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 [2024-12-06 14:37:00.871235] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.072 14:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:54.072 14:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.072 14:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 14:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:54.072 14:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.072 14:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 14:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.072 14:37:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.072 14:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 [2024-12-06 14:37:00.899500] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.072 14:37:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.072 14:37:00 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:54.375 14:37:01 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:54.375 14:37:01 -- common/autotest_common.sh@1187 -- # local i=0 00:22:54.375 14:37:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:54.375 14:37:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:54.375 14:37:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:56.281 14:37:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:56.281 14:37:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:56.281 14:37:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:56.281 14:37:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:56.281 14:37:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:56.281 14:37:03 -- common/autotest_common.sh@1197 -- # return 0 00:22:56.281 14:37:03 -- target/initiator_timeout.sh@35 -- # fio_pid=81617 00:22:56.281 14:37:03 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:56.281 14:37:03 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:56.281 [global] 00:22:56.281 thread=1 00:22:56.281 invalidate=1 00:22:56.281 rw=write 00:22:56.281 time_based=1 00:22:56.281 runtime=60 00:22:56.281 ioengine=libaio 00:22:56.281 direct=1 00:22:56.281 bs=4096 00:22:56.281 iodepth=1 00:22:56.281 norandommap=0 00:22:56.281 numjobs=1 00:22:56.281 00:22:56.281 verify_dump=1 00:22:56.281 verify_backlog=512 00:22:56.281 verify_state_save=0 00:22:56.281 do_verify=1 00:22:56.281 verify=crc32c-intel 00:22:56.281 [job0] 00:22:56.281 filename=/dev/nvme0n1 00:22:56.281 Could not set queue depth (nvme0n1) 00:22:56.539 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:56.539 fio-3.35 00:22:56.539 Starting 1 thread 00:22:59.142 14:37:06 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:59.142 14:37:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.142 14:37:06 -- common/autotest_common.sh@10 -- # set +x 00:22:59.403 true 00:22:59.403 14:37:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.403 14:37:06 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:59.403 14:37:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.403 14:37:06 -- common/autotest_common.sh@10 -- # set +x 00:22:59.403 true 00:22:59.403 14:37:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.403 14:37:06 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:59.403 14:37:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.403 14:37:06 -- common/autotest_common.sh@10 -- # set +x 00:22:59.403 true 00:22:59.403 14:37:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.403 14:37:06 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:59.403 14:37:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.403 14:37:06 -- common/autotest_common.sh@10 -- # set +x 00:22:59.403 true 00:22:59.403 14:37:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.403 14:37:06 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:23:02.688 14:37:09 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:02.688 14:37:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.688 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:23:02.688 true 00:23:02.688 14:37:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.688 14:37:09 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:02.688 14:37:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.688 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:23:02.688 true 00:23:02.688 14:37:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.688 14:37:09 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:02.688 14:37:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.688 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:23:02.688 true 00:23:02.688 14:37:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.688 14:37:09 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:02.688 14:37:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.688 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:23:02.688 true 00:23:02.688 14:37:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.688 14:37:09 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:02.688 14:37:09 -- target/initiator_timeout.sh@54 -- # wait 81617 00:23:58.926 00:23:58.926 job0: (groupid=0, jobs=1): err= 0: pid=81638: Fri Dec 6 14:38:03 2024 00:23:58.926 read: IOPS=742, BW=2970KiB/s (3041kB/s)(174MiB/60000msec) 00:23:58.926 slat (nsec): min=10591, max=95861, avg=14861.82, stdev=5518.46 00:23:58.926 clat (usec): min=164, max=2579, avg=221.82, stdev=28.82 00:23:58.926 lat (usec): min=176, max=2593, avg=236.68, stdev=29.51 00:23:58.926 clat percentiles (usec): 00:23:58.926 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:23:58.926 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:23:58.926 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 269], 00:23:58.926 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 371], 99.95th=[ 420], 00:23:58.926 | 99.99th=[ 635] 00:23:58.926 write: IOPS=744, BW=2978KiB/s (3050kB/s)(175MiB/60000msec); 0 zone resets 00:23:58.926 slat (usec): min=16, max=13581, avg=23.66, stdev=85.41 00:23:58.926 clat (usec): min=3, max=40446k, avg=1080.42, stdev=191362.11 00:23:58.926 lat (usec): min=144, max=40446k, avg=1104.08, stdev=191362.11 00:23:58.926 clat percentiles (usec): 00:23:58.926 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:23:58.926 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:23:58.926 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 217], 00:23:58.926 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 334], 99.95th=[ 367], 00:23:58.926 | 99.99th=[ 482] 00:23:58.926 bw ( KiB/s): min= 5296, max=11000, per=100.00%, avg=8950.79, stdev=1166.97, samples=39 00:23:58.926 iops : min= 1324, max= 2750, avg=2237.69, stdev=291.74, samples=39 00:23:58.926 lat (usec) : 4=0.01%, 250=93.28%, 500=6.71%, 750=0.01% 00:23:58.926 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:23:58.926 cpu : usr=0.46%, sys=2.08%, ctx=89231, majf=0, minf=5 00:23:58.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:23:58.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.927 issued rwts: total=44544,44672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.927 00:23:58.927 Run status group 0 (all jobs): 00:23:58.927 READ: bw=2970KiB/s (3041kB/s), 2970KiB/s-2970KiB/s (3041kB/s-3041kB/s), io=174MiB (182MB), run=60000-60000msec 00:23:58.927 WRITE: bw=2978KiB/s (3050kB/s), 2978KiB/s-2978KiB/s (3050kB/s-3050kB/s), io=175MiB (183MB), run=60000-60000msec 00:23:58.927 00:23:58.927 Disk stats (read/write): 00:23:58.927 nvme0n1: ios=44499/44544, merge=0/0, ticks=10166/8285, in_queue=18451, util=99.84% 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:58.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:58.927 14:38:03 -- common/autotest_common.sh@1208 -- # local i=0 00:23:58.927 14:38:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:58.927 14:38:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:58.927 14:38:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:58.927 14:38:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:58.927 14:38:03 -- common/autotest_common.sh@1220 -- # return 0 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:58.927 nvmf hotplug test: fio successful as expected 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.927 14:38:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.927 14:38:03 -- common/autotest_common.sh@10 -- # set +x 00:23:58.927 14:38:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:58.927 14:38:03 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:58.927 14:38:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:58.927 14:38:03 -- nvmf/common.sh@116 -- # sync 00:23:58.927 14:38:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:58.927 14:38:03 -- nvmf/common.sh@119 -- # set +e 00:23:58.927 14:38:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:58.927 14:38:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:58.927 rmmod nvme_tcp 00:23:58.927 rmmod nvme_fabrics 00:23:58.927 rmmod nvme_keyring 00:23:58.927 14:38:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:58.927 14:38:03 -- nvmf/common.sh@123 -- # set -e 00:23:58.927 14:38:03 -- nvmf/common.sh@124 -- # return 0 00:23:58.927 14:38:03 -- nvmf/common.sh@477 -- # '[' -n 81535 ']' 00:23:58.927 14:38:03 -- nvmf/common.sh@478 -- # killprocess 81535 00:23:58.927 14:38:03 -- common/autotest_common.sh@936 -- # '[' -z 81535 ']' 00:23:58.927 14:38:03 -- common/autotest_common.sh@940 -- # kill -0 81535 00:23:58.927 14:38:03 -- common/autotest_common.sh@941 -- # uname 00:23:58.927 14:38:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:58.927 14:38:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81535 00:23:58.927 14:38:03 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:58.927 14:38:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:58.927 killing process with pid 81535 00:23:58.927 14:38:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81535' 00:23:58.927 14:38:03 -- common/autotest_common.sh@955 -- # kill 81535 00:23:58.927 14:38:03 -- common/autotest_common.sh@960 -- # wait 81535 00:23:58.927 14:38:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:58.927 14:38:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:58.927 14:38:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:58.927 14:38:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.927 14:38:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.927 14:38:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.927 14:38:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:58.927 ************************************ 00:23:58.927 END TEST nvmf_initiator_timeout 00:23:58.927 ************************************ 00:23:58.927 00:23:58.927 real 1m4.964s 00:23:58.927 user 4m6.355s 00:23:58.927 sys 0m9.190s 00:23:58.927 14:38:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:58.927 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:58.927 14:38:04 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:23:58.927 14:38:04 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:58.927 14:38:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.927 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:58.927 14:38:04 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:58.927 14:38:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.927 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:58.927 14:38:04 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:58.927 14:38:04 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:58.927 14:38:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:58.927 14:38:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.927 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:58.927 ************************************ 00:23:58.927 START TEST nvmf_multicontroller 00:23:58.927 ************************************ 00:23:58.927 14:38:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:58.927 * Looking for test storage... 
00:23:58.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:58.927 14:38:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:58.927 14:38:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:58.927 14:38:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:58.927 14:38:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:58.927 14:38:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:58.927 14:38:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:58.927 14:38:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:58.927 14:38:04 -- scripts/common.sh@335 -- # IFS=.-: 00:23:58.927 14:38:04 -- scripts/common.sh@335 -- # read -ra ver1 00:23:58.927 14:38:04 -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.927 14:38:04 -- scripts/common.sh@336 -- # read -ra ver2 00:23:58.927 14:38:04 -- scripts/common.sh@337 -- # local 'op=<' 00:23:58.927 14:38:04 -- scripts/common.sh@339 -- # ver1_l=2 00:23:58.927 14:38:04 -- scripts/common.sh@340 -- # ver2_l=1 00:23:58.927 14:38:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:58.927 14:38:04 -- scripts/common.sh@343 -- # case "$op" in 00:23:58.927 14:38:04 -- scripts/common.sh@344 -- # : 1 00:23:58.927 14:38:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:58.927 14:38:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:58.927 14:38:04 -- scripts/common.sh@364 -- # decimal 1 00:23:58.927 14:38:04 -- scripts/common.sh@352 -- # local d=1 00:23:58.927 14:38:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.927 14:38:04 -- scripts/common.sh@354 -- # echo 1 00:23:58.927 14:38:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:58.927 14:38:04 -- scripts/common.sh@365 -- # decimal 2 00:23:58.927 14:38:04 -- scripts/common.sh@352 -- # local d=2 00:23:58.927 14:38:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.927 14:38:04 -- scripts/common.sh@354 -- # echo 2 00:23:58.927 14:38:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:58.927 14:38:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:58.927 14:38:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:58.927 14:38:04 -- scripts/common.sh@367 -- # return 0 00:23:58.927 14:38:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.927 14:38:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.927 --rc genhtml_branch_coverage=1 00:23:58.927 --rc genhtml_function_coverage=1 00:23:58.927 --rc genhtml_legend=1 00:23:58.927 --rc geninfo_all_blocks=1 00:23:58.927 --rc geninfo_unexecuted_blocks=1 00:23:58.927 00:23:58.927 ' 00:23:58.927 14:38:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.927 --rc genhtml_branch_coverage=1 00:23:58.927 --rc genhtml_function_coverage=1 00:23:58.927 --rc genhtml_legend=1 00:23:58.927 --rc geninfo_all_blocks=1 00:23:58.927 --rc geninfo_unexecuted_blocks=1 00:23:58.927 00:23:58.927 ' 00:23:58.927 14:38:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.927 --rc genhtml_branch_coverage=1 00:23:58.927 --rc genhtml_function_coverage=1 00:23:58.927 --rc genhtml_legend=1 00:23:58.927 --rc geninfo_all_blocks=1 00:23:58.927 --rc geninfo_unexecuted_blocks=1 00:23:58.927 00:23:58.927 ' 00:23:58.927 
14:38:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.927 --rc genhtml_branch_coverage=1 00:23:58.927 --rc genhtml_function_coverage=1 00:23:58.927 --rc genhtml_legend=1 00:23:58.927 --rc geninfo_all_blocks=1 00:23:58.927 --rc geninfo_unexecuted_blocks=1 00:23:58.927 00:23:58.927 ' 00:23:58.927 14:38:04 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:58.927 14:38:04 -- nvmf/common.sh@7 -- # uname -s 00:23:58.927 14:38:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.927 14:38:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.927 14:38:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.927 14:38:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.927 14:38:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.927 14:38:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.927 14:38:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.927 14:38:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.927 14:38:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.927 14:38:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:23:58.927 14:38:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:23:58.927 14:38:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.927 14:38:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.927 14:38:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:58.927 14:38:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:58.927 14:38:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.927 14:38:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.927 14:38:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.927 14:38:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.927 14:38:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.927 14:38:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.927 14:38:04 -- paths/export.sh@5 -- # export PATH 00:23:58.927 14:38:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.927 14:38:04 -- nvmf/common.sh@46 -- # : 0 00:23:58.927 14:38:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:58.927 14:38:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:58.927 14:38:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:58.927 14:38:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.927 14:38:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.927 14:38:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:58.927 14:38:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:58.927 14:38:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:58.927 14:38:04 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.927 14:38:04 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.927 14:38:04 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:58.927 14:38:04 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:58.927 14:38:04 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:58.927 14:38:04 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:58.927 14:38:04 -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:58.927 14:38:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:58.927 14:38:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.927 14:38:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:58.927 14:38:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:58.927 14:38:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:58.927 14:38:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.927 14:38:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.927 14:38:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.927 14:38:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:58.927 14:38:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:58.927 14:38:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.927 14:38:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:23:58.927 14:38:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:58.927 14:38:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:58.927 14:38:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:58.927 14:38:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:58.927 14:38:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:58.927 14:38:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.927 14:38:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:58.927 14:38:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:58.928 14:38:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:58.928 14:38:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:58.928 14:38:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:58.928 14:38:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:58.928 Cannot find device "nvmf_tgt_br" 00:23:58.928 14:38:04 -- nvmf/common.sh@154 -- # true 00:23:58.928 14:38:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:58.928 Cannot find device "nvmf_tgt_br2" 00:23:58.928 14:38:04 -- nvmf/common.sh@155 -- # true 00:23:58.928 14:38:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:58.928 14:38:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:58.928 Cannot find device "nvmf_tgt_br" 00:23:58.928 14:38:04 -- nvmf/common.sh@157 -- # true 00:23:58.928 14:38:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:58.928 Cannot find device "nvmf_tgt_br2" 00:23:58.928 14:38:04 -- nvmf/common.sh@158 -- # true 00:23:58.928 14:38:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:58.928 14:38:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:58.928 14:38:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:58.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.928 14:38:04 -- nvmf/common.sh@161 -- # true 00:23:58.928 14:38:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:58.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.928 14:38:04 -- nvmf/common.sh@162 -- # true 00:23:58.928 14:38:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:58.928 14:38:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:58.928 14:38:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:58.928 14:38:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:58.928 14:38:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:58.928 14:38:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:58.928 14:38:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:58.928 14:38:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:58.928 14:38:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:58.928 14:38:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:58.928 14:38:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:58.928 14:38:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:23:58.928 14:38:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:58.928 14:38:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:58.928 14:38:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:58.928 14:38:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:58.928 14:38:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:58.928 14:38:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:58.928 14:38:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:58.928 14:38:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:58.928 14:38:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:58.928 14:38:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:58.928 14:38:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:58.928 14:38:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:58.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:23:58.928 00:23:58.928 --- 10.0.0.2 ping statistics --- 00:23:58.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.928 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:58.928 14:38:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:58.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:58.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:23:58.928 00:23:58.928 --- 10.0.0.3 ping statistics --- 00:23:58.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.928 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:58.928 14:38:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:58.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:58.928 00:23:58.928 --- 10.0.0.1 ping statistics --- 00:23:58.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.928 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:58.928 14:38:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.928 14:38:04 -- nvmf/common.sh@421 -- # return 0 00:23:58.928 14:38:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:58.928 14:38:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.928 14:38:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:58.928 14:38:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:58.928 14:38:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.928 14:38:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:58.928 14:38:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:58.928 14:38:04 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:58.928 14:38:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:58.928 14:38:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.928 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 14:38:04 -- nvmf/common.sh@469 -- # nvmfpid=82477 00:23:58.928 14:38:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:58.928 14:38:04 -- nvmf/common.sh@470 -- # waitforlisten 82477 00:23:58.928 14:38:04 -- common/autotest_common.sh@829 -- # '[' -z 82477 ']' 00:23:58.928 14:38:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.928 14:38:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.928 14:38:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.928 14:38:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.928 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 [2024-12-06 14:38:04.819585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:58.928 [2024-12-06 14:38:04.819680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.928 [2024-12-06 14:38:04.958184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.928 [2024-12-06 14:38:05.082302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:58.928 [2024-12-06 14:38:05.082514] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.928 [2024-12-06 14:38:05.082537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.928 [2024-12-06 14:38:05.082549] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
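The nvmf_veth_init sequence traced just above gives the multicontroller test its own network: an initiator veth in the root namespace and two target veths inside nvmf_tgt_ns_spdk, joined by the nvmf_br bridge. Distilled from the commands in this log (interface names and addresses as used by nvmf/common.sh in this run, error handling and cleanup omitted), the topology amounts to roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above are the usual sanity check that 10.0.0.2, 10.0.0.3 and 10.0.0.1 are reachable across this bridge before the target is started.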
00:23:58.928 [2024-12-06 14:38:05.082716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.928 [2024-12-06 14:38:05.083208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.928 [2024-12-06 14:38:05.083219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.928 14:38:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.928 14:38:05 -- common/autotest_common.sh@862 -- # return 0 00:23:58.928 14:38:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:58.928 14:38:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 14:38:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.928 14:38:05 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:58.928 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 [2024-12-06 14:38:05.799109] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.928 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.928 14:38:05 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:58.928 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 Malloc0 00:23:58.928 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.928 14:38:05 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.928 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.928 14:38:05 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.928 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.928 14:38:05 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.928 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 [2024-12-06 14:38:05.866520] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.928 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.928 14:38:05 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.928 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:58.928 [2024-12-06 14:38:05.874439] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.928 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.928 14:38:05 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:58.928 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.928 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:59.185 Malloc1 00:23:59.185 14:38:05 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.185 14:38:05 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:59.185 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.185 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:59.185 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.185 14:38:05 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:59.185 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.185 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:59.185 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.185 14:38:05 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:59.185 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.185 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:59.185 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.185 14:38:05 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:59.185 14:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.185 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:23:59.185 14:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.185 14:38:05 -- host/multicontroller.sh@44 -- # bdevperf_pid=82529 00:23:59.185 14:38:05 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:59.185 14:38:05 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.185 14:38:05 -- host/multicontroller.sh@47 -- # waitforlisten 82529 /var/tmp/bdevperf.sock 00:23:59.185 14:38:05 -- common/autotest_common.sh@829 -- # '[' -z 82529 ']' 00:23:59.185 14:38:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.185 14:38:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.185 14:38:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
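Because bdevperf is started with -z here, it idles until controllers are attached over its RPC socket (/var/tmp/bdevperf.sock). The initial attach that follows in the trace corresponds roughly to the rpc.py call below; rpc_cmd in this log is a wrapper around scripts/rpc.py, so treat this as an illustrative equivalent rather than the literal command line:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000      # pin host address 10.0.0.2 and host svcid 60000

The negative cases that follow deliberately repeat this attach under the same name NVMe0 with a different hostnqn, a different subsystem, and multipath set to disable or failover; each is expected to fail with the "already exists" errors shown in the JSON-RPC responses.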
00:23:59.185 14:38:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.185 14:38:05 -- common/autotest_common.sh@10 -- # set +x 00:24:00.117 14:38:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.117 14:38:06 -- common/autotest_common.sh@862 -- # return 0 00:24:00.117 14:38:06 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:00.117 14:38:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.117 14:38:06 -- common/autotest_common.sh@10 -- # set +x 00:24:00.117 NVMe0n1 00:24:00.117 14:38:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.118 14:38:06 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.118 14:38:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.118 14:38:06 -- common/autotest_common.sh@10 -- # set +x 00:24:00.118 14:38:06 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:00.118 14:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.118 1 00:24:00.118 14:38:07 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:00.118 14:38:07 -- common/autotest_common.sh@650 -- # local es=0 00:24:00.118 14:38:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:00.118 14:38:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:00.118 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.118 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.118 2024/12/06 14:38:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:00.118 request: 00:24:00.118 { 00:24:00.118 "method": "bdev_nvme_attach_controller", 00:24:00.118 "params": { 00:24:00.118 "name": "NVMe0", 00:24:00.118 "trtype": "tcp", 00:24:00.118 "traddr": "10.0.0.2", 00:24:00.118 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:00.118 "hostaddr": "10.0.0.2", 00:24:00.118 "hostsvcid": "60000", 00:24:00.118 "adrfam": "ipv4", 00:24:00.118 "trsvcid": "4420", 00:24:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:24:00.118 } 00:24:00.118 } 00:24:00.118 Got JSON-RPC error response 00:24:00.118 GoRPCClient: error on JSON-RPC call 00:24:00.118 14:38:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:00.118 14:38:07 -- 
common/autotest_common.sh@653 -- # es=1 00:24:00.118 14:38:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:00.118 14:38:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:00.118 14:38:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:00.118 14:38:07 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:00.118 14:38:07 -- common/autotest_common.sh@650 -- # local es=0 00:24:00.118 14:38:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:00.118 14:38:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:00.118 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.118 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.118 2024/12/06 14:38:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:00.118 request: 00:24:00.118 { 00:24:00.118 "method": "bdev_nvme_attach_controller", 00:24:00.118 "params": { 00:24:00.118 "name": "NVMe0", 00:24:00.118 "trtype": "tcp", 00:24:00.118 "traddr": "10.0.0.2", 00:24:00.118 "hostaddr": "10.0.0.2", 00:24:00.118 "hostsvcid": "60000", 00:24:00.118 "adrfam": "ipv4", 00:24:00.118 "trsvcid": "4420", 00:24:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:24:00.118 } 00:24:00.118 } 00:24:00.118 Got JSON-RPC error response 00:24:00.118 GoRPCClient: error on JSON-RPC call 00:24:00.118 14:38:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:00.118 14:38:07 -- common/autotest_common.sh@653 -- # es=1 00:24:00.118 14:38:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:00.118 14:38:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:00.118 14:38:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:00.118 14:38:07 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:00.118 14:38:07 -- common/autotest_common.sh@650 -- # local es=0 00:24:00.118 14:38:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:00.118 14:38:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:00.118 14:38:07 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:00.118 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.118 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.118 2024/12/06 14:38:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:24:00.118 request: 00:24:00.118 { 00:24:00.118 "method": "bdev_nvme_attach_controller", 00:24:00.118 "params": { 00:24:00.118 "name": "NVMe0", 00:24:00.118 "trtype": "tcp", 00:24:00.118 "traddr": "10.0.0.2", 00:24:00.118 "hostaddr": "10.0.0.2", 00:24:00.118 "hostsvcid": "60000", 00:24:00.118 "adrfam": "ipv4", 00:24:00.118 "trsvcid": "4420", 00:24:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.118 "multipath": "disable" 00:24:00.118 } 00:24:00.118 } 00:24:00.118 Got JSON-RPC error response 00:24:00.118 GoRPCClient: error on JSON-RPC call 00:24:00.118 14:38:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:00.118 14:38:07 -- common/autotest_common.sh@653 -- # es=1 00:24:00.118 14:38:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:00.118 14:38:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:00.118 14:38:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:00.118 14:38:07 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:00.118 14:38:07 -- common/autotest_common.sh@650 -- # local es=0 00:24:00.118 14:38:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:00.118 14:38:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:00.118 14:38:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.118 14:38:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:00.118 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.118 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.118 2024/12/06 14:38:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:00.118 request: 00:24:00.118 { 00:24:00.118 "method": "bdev_nvme_attach_controller", 00:24:00.118 "params": { 00:24:00.118 "name": "NVMe0", 
00:24:00.118 "trtype": "tcp", 00:24:00.118 "traddr": "10.0.0.2", 00:24:00.118 "hostaddr": "10.0.0.2", 00:24:00.118 "hostsvcid": "60000", 00:24:00.118 "adrfam": "ipv4", 00:24:00.118 "trsvcid": "4420", 00:24:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.118 "multipath": "failover" 00:24:00.118 } 00:24:00.118 } 00:24:00.118 Got JSON-RPC error response 00:24:00.118 GoRPCClient: error on JSON-RPC call 00:24:00.118 14:38:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:00.118 14:38:07 -- common/autotest_common.sh@653 -- # es=1 00:24:00.118 14:38:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:00.118 14:38:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:00.118 14:38:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:00.119 14:38:07 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:00.119 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.119 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 00:24:00.376 14:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.376 14:38:07 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:00.376 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.376 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 14:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.376 14:38:07 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:00.376 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.376 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 00:24:00.376 14:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.376 14:38:07 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.376 14:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.376 14:38:07 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:00.377 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:00.377 14:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.377 14:38:07 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:00.377 14:38:07 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.747 0 00:24:01.747 14:38:08 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:01.747 14:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.747 14:38:08 -- common/autotest_common.sh@10 -- # set +x 00:24:01.747 14:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.747 14:38:08 -- host/multicontroller.sh@100 -- # killprocess 82529 00:24:01.747 14:38:08 -- common/autotest_common.sh@936 -- # '[' -z 82529 ']' 00:24:01.747 14:38:08 -- common/autotest_common.sh@940 -- # kill -0 82529 00:24:01.747 14:38:08 -- common/autotest_common.sh@941 -- # uname 00:24:01.747 14:38:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.747 14:38:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82529 00:24:01.747 14:38:08 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:24:01.747 14:38:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:01.747 14:38:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82529' 00:24:01.747 killing process with pid 82529 00:24:01.747 14:38:08 -- common/autotest_common.sh@955 -- # kill 82529 00:24:01.747 14:38:08 -- common/autotest_common.sh@960 -- # wait 82529 00:24:01.747 14:38:08 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.747 14:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.747 14:38:08 -- common/autotest_common.sh@10 -- # set +x 00:24:01.747 14:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.747 14:38:08 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:01.747 14:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.747 14:38:08 -- common/autotest_common.sh@10 -- # set +x 00:24:01.747 14:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.747 14:38:08 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:01.747 14:38:08 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:01.747 14:38:08 -- common/autotest_common.sh@1607 -- # read -r file 00:24:01.747 14:38:08 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:24:01.747 14:38:08 -- common/autotest_common.sh@1606 -- # sort -u 00:24:01.747 14:38:08 -- common/autotest_common.sh@1608 -- # cat 00:24:01.747 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:01.747 [2024-12-06 14:38:05.997890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:01.747 [2024-12-06 14:38:05.998027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82529 ] 00:24:01.747 [2024-12-06 14:38:06.134385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.747 [2024-12-06 14:38:06.240061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.747 [2024-12-06 14:38:07.204040] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name ca7d4dc6-27d8-4512-b24f-0d9170db175d already exists 00:24:01.747 [2024-12-06 14:38:07.204096] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:ca7d4dc6-27d8-4512-b24f-0d9170db175d alias for bdev NVMe1n1 00:24:01.747 [2024-12-06 14:38:07.204127] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:01.747 Running I/O for 1 seconds... 
00:24:01.747 00:24:01.747 Latency(us) 00:24:01.747 [2024-12-06T14:38:08.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.747 [2024-12-06T14:38:08.717Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:01.747 NVMe0n1 : 1.00 19715.94 77.02 0.00 0.00 6482.99 3693.85 11677.32 00:24:01.747 [2024-12-06T14:38:08.717Z] =================================================================================================================== 00:24:01.747 [2024-12-06T14:38:08.717Z] Total : 19715.94 77.02 0.00 0.00 6482.99 3693.85 11677.32 00:24:01.747 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.747 00:24:01.747 Latency(us) 00:24:01.747 [2024-12-06T14:38:08.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.747 [2024-12-06T14:38:08.717Z] =================================================================================================================== 00:24:01.747 [2024-12-06T14:38:08.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.747 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:01.747 14:38:08 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:01.747 14:38:08 -- common/autotest_common.sh@1607 -- # read -r file 00:24:01.747 14:38:08 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:01.747 14:38:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:01.747 14:38:08 -- nvmf/common.sh@116 -- # sync 00:24:02.004 14:38:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:02.004 14:38:08 -- nvmf/common.sh@119 -- # set +e 00:24:02.004 14:38:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:02.004 14:38:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:02.004 rmmod nvme_tcp 00:24:02.004 rmmod nvme_fabrics 00:24:02.004 rmmod nvme_keyring 00:24:02.004 14:38:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:02.004 14:38:08 -- nvmf/common.sh@123 -- # set -e 00:24:02.004 14:38:08 -- nvmf/common.sh@124 -- # return 0 00:24:02.004 14:38:08 -- nvmf/common.sh@477 -- # '[' -n 82477 ']' 00:24:02.004 14:38:08 -- nvmf/common.sh@478 -- # killprocess 82477 00:24:02.004 14:38:08 -- common/autotest_common.sh@936 -- # '[' -z 82477 ']' 00:24:02.004 14:38:08 -- common/autotest_common.sh@940 -- # kill -0 82477 00:24:02.004 14:38:08 -- common/autotest_common.sh@941 -- # uname 00:24:02.004 14:38:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:02.004 14:38:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82477 00:24:02.004 14:38:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:02.004 14:38:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:02.004 killing process with pid 82477 00:24:02.004 14:38:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82477' 00:24:02.004 14:38:08 -- common/autotest_common.sh@955 -- # kill 82477 00:24:02.004 14:38:08 -- common/autotest_common.sh@960 -- # wait 82477 00:24:02.262 14:38:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:02.262 14:38:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:02.262 14:38:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:02.262 14:38:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.262 14:38:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:02.262 14:38:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.262 14:38:09 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:24:02.262 14:38:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.262 14:38:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:02.262 00:24:02.262 real 0m5.012s 00:24:02.262 user 0m15.185s 00:24:02.262 sys 0m1.087s 00:24:02.262 14:38:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:02.262 14:38:09 -- common/autotest_common.sh@10 -- # set +x 00:24:02.262 ************************************ 00:24:02.262 END TEST nvmf_multicontroller 00:24:02.262 ************************************ 00:24:02.262 14:38:09 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:02.262 14:38:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:02.262 14:38:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.262 14:38:09 -- common/autotest_common.sh@10 -- # set +x 00:24:02.262 ************************************ 00:24:02.262 START TEST nvmf_aer 00:24:02.262 ************************************ 00:24:02.262 14:38:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:02.521 * Looking for test storage... 00:24:02.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:02.521 14:38:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:02.521 14:38:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:02.521 14:38:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:02.521 14:38:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:02.521 14:38:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:02.521 14:38:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:02.521 14:38:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:02.521 14:38:09 -- scripts/common.sh@335 -- # IFS=.-: 00:24:02.521 14:38:09 -- scripts/common.sh@335 -- # read -ra ver1 00:24:02.521 14:38:09 -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.521 14:38:09 -- scripts/common.sh@336 -- # read -ra ver2 00:24:02.521 14:38:09 -- scripts/common.sh@337 -- # local 'op=<' 00:24:02.521 14:38:09 -- scripts/common.sh@339 -- # ver1_l=2 00:24:02.521 14:38:09 -- scripts/common.sh@340 -- # ver2_l=1 00:24:02.521 14:38:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:02.521 14:38:09 -- scripts/common.sh@343 -- # case "$op" in 00:24:02.521 14:38:09 -- scripts/common.sh@344 -- # : 1 00:24:02.521 14:38:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:02.521 14:38:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.521 14:38:09 -- scripts/common.sh@364 -- # decimal 1 00:24:02.521 14:38:09 -- scripts/common.sh@352 -- # local d=1 00:24:02.521 14:38:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.521 14:38:09 -- scripts/common.sh@354 -- # echo 1 00:24:02.521 14:38:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:02.521 14:38:09 -- scripts/common.sh@365 -- # decimal 2 00:24:02.521 14:38:09 -- scripts/common.sh@352 -- # local d=2 00:24:02.521 14:38:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.521 14:38:09 -- scripts/common.sh@354 -- # echo 2 00:24:02.521 14:38:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:02.521 14:38:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.521 14:38:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.521 14:38:09 -- scripts/common.sh@367 -- # return 0 00:24:02.521 14:38:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.521 14:38:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:02.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.521 --rc genhtml_branch_coverage=1 00:24:02.521 --rc genhtml_function_coverage=1 00:24:02.521 --rc genhtml_legend=1 00:24:02.521 --rc geninfo_all_blocks=1 00:24:02.521 --rc geninfo_unexecuted_blocks=1 00:24:02.521 00:24:02.521 ' 00:24:02.521 14:38:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:02.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.521 --rc genhtml_branch_coverage=1 00:24:02.521 --rc genhtml_function_coverage=1 00:24:02.521 --rc genhtml_legend=1 00:24:02.521 --rc geninfo_all_blocks=1 00:24:02.521 --rc geninfo_unexecuted_blocks=1 00:24:02.521 00:24:02.521 ' 00:24:02.521 14:38:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:02.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.521 --rc genhtml_branch_coverage=1 00:24:02.521 --rc genhtml_function_coverage=1 00:24:02.521 --rc genhtml_legend=1 00:24:02.521 --rc geninfo_all_blocks=1 00:24:02.521 --rc geninfo_unexecuted_blocks=1 00:24:02.521 00:24:02.521 ' 00:24:02.521 14:38:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:02.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.521 --rc genhtml_branch_coverage=1 00:24:02.521 --rc genhtml_function_coverage=1 00:24:02.521 --rc genhtml_legend=1 00:24:02.521 --rc geninfo_all_blocks=1 00:24:02.521 --rc geninfo_unexecuted_blocks=1 00:24:02.521 00:24:02.521 ' 00:24:02.521 14:38:09 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:02.521 14:38:09 -- nvmf/common.sh@7 -- # uname -s 00:24:02.521 14:38:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.521 14:38:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.521 14:38:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.521 14:38:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.521 14:38:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.521 14:38:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.521 14:38:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.521 14:38:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.521 14:38:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.521 14:38:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.521 14:38:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:02.521 
14:38:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:02.521 14:38:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.521 14:38:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.521 14:38:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:02.521 14:38:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.521 14:38:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.521 14:38:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.521 14:38:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.521 14:38:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.521 14:38:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.521 14:38:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.521 14:38:09 -- paths/export.sh@5 -- # export PATH 00:24:02.521 14:38:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.521 14:38:09 -- nvmf/common.sh@46 -- # : 0 00:24:02.521 14:38:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.521 14:38:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.521 14:38:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.521 14:38:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.521 14:38:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.521 14:38:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:02.521 14:38:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.521 14:38:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.521 14:38:09 -- host/aer.sh@11 -- # nvmftestinit 00:24:02.521 14:38:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:02.521 14:38:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.521 14:38:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.521 14:38:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.521 14:38:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.521 14:38:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.521 14:38:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.521 14:38:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.521 14:38:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:02.521 14:38:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:02.521 14:38:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:02.521 14:38:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:02.521 14:38:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:02.521 14:38:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:02.521 14:38:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.521 14:38:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.521 14:38:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:02.521 14:38:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:02.521 14:38:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:02.521 14:38:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:02.521 14:38:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:02.521 14:38:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.521 14:38:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:02.521 14:38:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:02.521 14:38:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:02.521 14:38:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:02.521 14:38:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:02.521 14:38:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:02.521 Cannot find device "nvmf_tgt_br" 00:24:02.521 14:38:09 -- nvmf/common.sh@154 -- # true 00:24:02.521 14:38:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.521 Cannot find device "nvmf_tgt_br2" 00:24:02.521 14:38:09 -- nvmf/common.sh@155 -- # true 00:24:02.521 14:38:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:02.521 14:38:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:02.521 Cannot find device "nvmf_tgt_br" 00:24:02.521 14:38:09 -- nvmf/common.sh@157 -- # true 00:24:02.521 14:38:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:02.521 Cannot find device "nvmf_tgt_br2" 00:24:02.521 14:38:09 -- nvmf/common.sh@158 -- # true 00:24:02.521 14:38:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:02.780 14:38:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:02.780 14:38:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.780 14:38:09 -- nvmf/common.sh@161 -- # true 00:24:02.780 14:38:09 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.780 14:38:09 -- nvmf/common.sh@162 -- # true 00:24:02.780 14:38:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:02.780 14:38:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:02.780 14:38:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:02.780 14:38:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:02.780 14:38:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:02.780 14:38:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.780 14:38:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.780 14:38:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:02.780 14:38:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:02.780 14:38:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:02.780 14:38:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:02.780 14:38:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:02.780 14:38:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:02.780 14:38:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.780 14:38:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.780 14:38:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.780 14:38:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:02.780 14:38:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:02.780 14:38:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.780 14:38:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:02.780 14:38:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.780 14:38:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.780 14:38:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.780 14:38:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:02.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:24:02.780 00:24:02.780 --- 10.0.0.2 ping statistics --- 00:24:02.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.780 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:24:02.780 14:38:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:02.780 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.780 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:24:02.780 00:24:02.780 --- 10.0.0.3 ping statistics --- 00:24:02.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.780 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:02.780 14:38:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:24:02.780 00:24:02.780 --- 10.0.0.1 ping statistics --- 00:24:02.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.780 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:24:02.780 14:38:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.780 14:38:09 -- nvmf/common.sh@421 -- # return 0 00:24:02.780 14:38:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:02.780 14:38:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.780 14:38:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:02.780 14:38:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:02.780 14:38:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.780 14:38:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:02.780 14:38:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:03.039 14:38:09 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:03.039 14:38:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:03.039 14:38:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.039 14:38:09 -- common/autotest_common.sh@10 -- # set +x 00:24:03.039 14:38:09 -- nvmf/common.sh@469 -- # nvmfpid=82786 00:24:03.039 14:38:09 -- nvmf/common.sh@470 -- # waitforlisten 82786 00:24:03.039 14:38:09 -- common/autotest_common.sh@829 -- # '[' -z 82786 ']' 00:24:03.039 14:38:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.039 14:38:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.039 14:38:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.039 14:38:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.039 14:38:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.039 14:38:09 -- common/autotest_common.sh@10 -- # set +x 00:24:03.039 [2024-12-06 14:38:09.810594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:03.039 [2024-12-06 14:38:09.810688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.039 [2024-12-06 14:38:09.947611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.299 [2024-12-06 14:38:10.075597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:03.299 [2024-12-06 14:38:10.075778] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.299 [2024-12-06 14:38:10.075794] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.299 [2024-12-06 14:38:10.075805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
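For orientation, the nvmf_veth_init sequence traced above boils down to a small, self-contained topology: one initiator veth on the host (10.0.0.1/24), two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2/24 and 10.0.0.3/24), all peer ends enslaved to a single bridge, and iptables opened for TCP port 4420. A condensed sketch of the same setup, with interface names and addresses taken verbatim from this trace and assuming a root shell with iproute2 and iptables available:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # the reachability checks logged above

With this topology in place, the nvmf_tgt process launched next inside the namespace can listen on 10.0.0.2/10.0.0.3 while the initiator-side tools reach it from 10.0.0.1 through nvmf_br.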
00:24:03.299 [2024-12-06 14:38:10.076002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.299 [2024-12-06 14:38:10.076532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.299 [2024-12-06 14:38:10.076646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.299 [2024-12-06 14:38:10.076650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.865 14:38:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.865 14:38:10 -- common/autotest_common.sh@862 -- # return 0 00:24:03.865 14:38:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:03.865 14:38:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:03.865 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:24:03.865 14:38:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.865 14:38:10 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.865 14:38:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.865 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:24:03.865 [2024-12-06 14:38:10.828498] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.124 14:38:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.124 14:38:10 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:04.124 14:38:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.124 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:24:04.124 Malloc0 00:24:04.124 14:38:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.124 14:38:10 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:04.124 14:38:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.124 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:24:04.124 14:38:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.124 14:38:10 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:04.124 14:38:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.124 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:24:04.124 14:38:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.124 14:38:10 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.124 14:38:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.124 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:24:04.124 [2024-12-06 14:38:10.895394] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.124 14:38:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.124 14:38:10 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:04.124 14:38:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.124 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:24:04.124 [2024-12-06 14:38:10.903074] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:04.124 [ 00:24:04.124 { 00:24:04.124 "allow_any_host": true, 00:24:04.124 "hosts": [], 00:24:04.124 "listen_addresses": [], 00:24:04.124 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:04.124 "subtype": "Discovery" 00:24:04.124 }, 00:24:04.124 { 00:24:04.124 "allow_any_host": true, 00:24:04.124 "hosts": 
[], 00:24:04.124 "listen_addresses": [ 00:24:04.124 { 00:24:04.124 "adrfam": "IPv4", 00:24:04.124 "traddr": "10.0.0.2", 00:24:04.124 "transport": "TCP", 00:24:04.124 "trsvcid": "4420", 00:24:04.124 "trtype": "TCP" 00:24:04.124 } 00:24:04.124 ], 00:24:04.124 "max_cntlid": 65519, 00:24:04.124 "max_namespaces": 2, 00:24:04.124 "min_cntlid": 1, 00:24:04.124 "model_number": "SPDK bdev Controller", 00:24:04.124 "namespaces": [ 00:24:04.124 { 00:24:04.124 "bdev_name": "Malloc0", 00:24:04.124 "name": "Malloc0", 00:24:04.124 "nguid": "974A022790794898B4E3762FE7933B76", 00:24:04.124 "nsid": 1, 00:24:04.124 "uuid": "974a0227-9079-4898-b4e3-762fe7933b76" 00:24:04.124 } 00:24:04.124 ], 00:24:04.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.124 "serial_number": "SPDK00000000000001", 00:24:04.124 "subtype": "NVMe" 00:24:04.124 } 00:24:04.124 ] 00:24:04.124 14:38:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.124 14:38:10 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:04.124 14:38:10 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:04.124 14:38:10 -- host/aer.sh@33 -- # aerpid=82846 00:24:04.124 14:38:10 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:04.124 14:38:10 -- common/autotest_common.sh@1254 -- # local i=0 00:24:04.124 14:38:10 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:04.124 14:38:10 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.124 14:38:10 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:24:04.124 14:38:10 -- common/autotest_common.sh@1257 -- # i=1 00:24:04.124 14:38:10 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:04.124 14:38:11 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.124 14:38:11 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:24:04.124 14:38:11 -- common/autotest_common.sh@1257 -- # i=2 00:24:04.124 14:38:11 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:04.383 14:38:11 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.383 14:38:11 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.383 14:38:11 -- common/autotest_common.sh@1265 -- # return 0 00:24:04.383 14:38:11 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:04.383 14:38:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.383 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.383 Malloc1 00:24:04.383 14:38:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.383 14:38:11 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:04.383 14:38:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.383 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.383 14:38:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.383 14:38:11 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:04.383 14:38:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.383 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.383 Asynchronous Event Request test 00:24:04.383 Attaching to 10.0.0.2 00:24:04.383 Attached to 10.0.0.2 00:24:04.383 Registering asynchronous event callbacks... 00:24:04.383 Starting namespace attribute notice tests for all controllers... 
00:24:04.383 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:04.383 aer_cb - Changed Namespace 00:24:04.383 Cleaning up... 00:24:04.383 [ 00:24:04.383 { 00:24:04.383 "allow_any_host": true, 00:24:04.383 "hosts": [], 00:24:04.383 "listen_addresses": [], 00:24:04.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:04.383 "subtype": "Discovery" 00:24:04.383 }, 00:24:04.383 { 00:24:04.383 "allow_any_host": true, 00:24:04.383 "hosts": [], 00:24:04.383 "listen_addresses": [ 00:24:04.383 { 00:24:04.383 "adrfam": "IPv4", 00:24:04.383 "traddr": "10.0.0.2", 00:24:04.383 "transport": "TCP", 00:24:04.383 "trsvcid": "4420", 00:24:04.383 "trtype": "TCP" 00:24:04.383 } 00:24:04.383 ], 00:24:04.383 "max_cntlid": 65519, 00:24:04.383 "max_namespaces": 2, 00:24:04.383 "min_cntlid": 1, 00:24:04.383 "model_number": "SPDK bdev Controller", 00:24:04.383 "namespaces": [ 00:24:04.383 { 00:24:04.383 "bdev_name": "Malloc0", 00:24:04.383 "name": "Malloc0", 00:24:04.383 "nguid": "974A022790794898B4E3762FE7933B76", 00:24:04.383 "nsid": 1, 00:24:04.383 "uuid": "974a0227-9079-4898-b4e3-762fe7933b76" 00:24:04.383 }, 00:24:04.383 { 00:24:04.383 "bdev_name": "Malloc1", 00:24:04.383 "name": "Malloc1", 00:24:04.383 "nguid": "4ABD739B96F04A939313A722C7BF7A05", 00:24:04.383 "nsid": 2, 00:24:04.383 "uuid": "4abd739b-96f0-4a93-9313-a722c7bf7a05" 00:24:04.383 } 00:24:04.383 ], 00:24:04.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.383 "serial_number": "SPDK00000000000001", 00:24:04.383 "subtype": "NVMe" 00:24:04.383 } 00:24:04.383 ] 00:24:04.383 14:38:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.383 14:38:11 -- host/aer.sh@43 -- # wait 82846 00:24:04.383 14:38:11 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:04.383 14:38:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.383 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.383 14:38:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.383 14:38:11 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:04.383 14:38:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.383 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.383 14:38:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.383 14:38:11 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.383 14:38:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.383 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.383 14:38:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.383 14:38:11 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:04.383 14:38:11 -- host/aer.sh@51 -- # nvmftestfini 00:24:04.383 14:38:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:04.383 14:38:11 -- nvmf/common.sh@116 -- # sync 00:24:04.383 14:38:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:04.383 14:38:11 -- nvmf/common.sh@119 -- # set +e 00:24:04.383 14:38:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:04.383 14:38:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:04.383 rmmod nvme_tcp 00:24:04.383 rmmod nvme_fabrics 00:24:04.383 rmmod nvme_keyring 00:24:04.641 14:38:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:04.641 14:38:11 -- nvmf/common.sh@123 -- # set -e 00:24:04.641 14:38:11 -- nvmf/common.sh@124 -- # return 0 00:24:04.641 14:38:11 -- nvmf/common.sh@477 -- # '[' -n 82786 ']' 00:24:04.641 14:38:11 -- nvmf/common.sh@478 -- # killprocess 82786 00:24:04.641 14:38:11 -- 
common/autotest_common.sh@936 -- # '[' -z 82786 ']' 00:24:04.641 14:38:11 -- common/autotest_common.sh@940 -- # kill -0 82786 00:24:04.641 14:38:11 -- common/autotest_common.sh@941 -- # uname 00:24:04.641 14:38:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:04.641 14:38:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82786 00:24:04.641 14:38:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:04.641 killing process with pid 82786 00:24:04.641 14:38:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:04.641 14:38:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82786' 00:24:04.641 14:38:11 -- common/autotest_common.sh@955 -- # kill 82786 00:24:04.641 [2024-12-06 14:38:11.401582] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:04.642 14:38:11 -- common/autotest_common.sh@960 -- # wait 82786 00:24:04.900 14:38:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:04.900 14:38:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:04.900 14:38:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:04.900 14:38:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.900 14:38:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:04.900 14:38:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.900 14:38:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.900 14:38:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.900 14:38:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:04.900 00:24:04.900 real 0m2.490s 00:24:04.900 user 0m6.485s 00:24:04.900 sys 0m0.694s 00:24:04.900 14:38:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:04.900 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.900 ************************************ 00:24:04.900 END TEST nvmf_aer 00:24:04.900 ************************************ 00:24:04.900 14:38:11 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:04.900 14:38:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:04.900 14:38:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.900 14:38:11 -- common/autotest_common.sh@10 -- # set +x 00:24:04.900 ************************************ 00:24:04.900 START TEST nvmf_async_init 00:24:04.900 ************************************ 00:24:04.900 14:38:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:04.900 * Looking for test storage... 
00:24:04.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:04.900 14:38:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:04.900 14:38:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:04.900 14:38:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:05.159 14:38:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:05.159 14:38:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:05.159 14:38:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:05.159 14:38:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:05.159 14:38:11 -- scripts/common.sh@335 -- # IFS=.-: 00:24:05.159 14:38:11 -- scripts/common.sh@335 -- # read -ra ver1 00:24:05.159 14:38:11 -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.159 14:38:11 -- scripts/common.sh@336 -- # read -ra ver2 00:24:05.159 14:38:11 -- scripts/common.sh@337 -- # local 'op=<' 00:24:05.159 14:38:11 -- scripts/common.sh@339 -- # ver1_l=2 00:24:05.159 14:38:11 -- scripts/common.sh@340 -- # ver2_l=1 00:24:05.159 14:38:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:05.159 14:38:11 -- scripts/common.sh@343 -- # case "$op" in 00:24:05.159 14:38:11 -- scripts/common.sh@344 -- # : 1 00:24:05.159 14:38:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:05.159 14:38:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.159 14:38:11 -- scripts/common.sh@364 -- # decimal 1 00:24:05.159 14:38:11 -- scripts/common.sh@352 -- # local d=1 00:24:05.159 14:38:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.159 14:38:11 -- scripts/common.sh@354 -- # echo 1 00:24:05.159 14:38:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:05.159 14:38:11 -- scripts/common.sh@365 -- # decimal 2 00:24:05.159 14:38:11 -- scripts/common.sh@352 -- # local d=2 00:24:05.159 14:38:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.159 14:38:11 -- scripts/common.sh@354 -- # echo 2 00:24:05.159 14:38:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:05.159 14:38:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:05.159 14:38:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:05.159 14:38:11 -- scripts/common.sh@367 -- # return 0 00:24:05.159 14:38:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.159 14:38:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:05.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.159 --rc genhtml_branch_coverage=1 00:24:05.159 --rc genhtml_function_coverage=1 00:24:05.159 --rc genhtml_legend=1 00:24:05.159 --rc geninfo_all_blocks=1 00:24:05.159 --rc geninfo_unexecuted_blocks=1 00:24:05.159 00:24:05.159 ' 00:24:05.159 14:38:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:05.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.160 --rc genhtml_branch_coverage=1 00:24:05.160 --rc genhtml_function_coverage=1 00:24:05.160 --rc genhtml_legend=1 00:24:05.160 --rc geninfo_all_blocks=1 00:24:05.160 --rc geninfo_unexecuted_blocks=1 00:24:05.160 00:24:05.160 ' 00:24:05.160 14:38:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:05.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.160 --rc genhtml_branch_coverage=1 00:24:05.160 --rc genhtml_function_coverage=1 00:24:05.160 --rc genhtml_legend=1 00:24:05.160 --rc geninfo_all_blocks=1 00:24:05.160 --rc geninfo_unexecuted_blocks=1 00:24:05.160 00:24:05.160 ' 00:24:05.160 
14:38:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:05.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.160 --rc genhtml_branch_coverage=1 00:24:05.160 --rc genhtml_function_coverage=1 00:24:05.160 --rc genhtml_legend=1 00:24:05.160 --rc geninfo_all_blocks=1 00:24:05.160 --rc geninfo_unexecuted_blocks=1 00:24:05.160 00:24:05.160 ' 00:24:05.160 14:38:11 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.160 14:38:11 -- nvmf/common.sh@7 -- # uname -s 00:24:05.160 14:38:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.160 14:38:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.160 14:38:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.160 14:38:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.160 14:38:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.160 14:38:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.160 14:38:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.160 14:38:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.160 14:38:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.160 14:38:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.160 14:38:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:05.160 14:38:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:05.160 14:38:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.160 14:38:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.160 14:38:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.160 14:38:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.160 14:38:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.160 14:38:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.160 14:38:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.160 14:38:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.160 14:38:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.160 14:38:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.160 14:38:11 -- paths/export.sh@5 -- # export PATH 00:24:05.160 14:38:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.160 14:38:11 -- nvmf/common.sh@46 -- # : 0 00:24:05.160 14:38:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:05.160 14:38:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:05.160 14:38:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:05.160 14:38:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.160 14:38:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.160 14:38:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:05.160 14:38:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:05.160 14:38:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:05.160 14:38:11 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:05.160 14:38:11 -- host/async_init.sh@14 -- # null_block_size=512 00:24:05.160 14:38:11 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:05.160 14:38:11 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:05.160 14:38:11 -- host/async_init.sh@20 -- # uuidgen 00:24:05.160 14:38:11 -- host/async_init.sh@20 -- # tr -d - 00:24:05.160 14:38:11 -- host/async_init.sh@20 -- # nguid=b1ea89cf86b34f62a37cf20e7a0d6b6a 00:24:05.160 14:38:11 -- host/async_init.sh@22 -- # nvmftestinit 00:24:05.160 14:38:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:05.160 14:38:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.160 14:38:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:05.160 14:38:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:05.160 14:38:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:05.160 14:38:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.160 14:38:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.160 14:38:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.160 14:38:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:05.160 14:38:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:05.160 14:38:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:05.160 14:38:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:05.160 14:38:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:05.160 14:38:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:05.160 14:38:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.160 14:38:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.160 14:38:11 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:05.160 14:38:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:05.160 14:38:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:05.160 14:38:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:05.160 14:38:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:05.160 14:38:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.160 14:38:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:05.160 14:38:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:05.160 14:38:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:05.160 14:38:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:05.160 14:38:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:05.160 14:38:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:05.160 Cannot find device "nvmf_tgt_br" 00:24:05.160 14:38:11 -- nvmf/common.sh@154 -- # true 00:24:05.160 14:38:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.160 Cannot find device "nvmf_tgt_br2" 00:24:05.160 14:38:11 -- nvmf/common.sh@155 -- # true 00:24:05.160 14:38:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:05.160 14:38:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:05.160 Cannot find device "nvmf_tgt_br" 00:24:05.160 14:38:12 -- nvmf/common.sh@157 -- # true 00:24:05.160 14:38:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:05.160 Cannot find device "nvmf_tgt_br2" 00:24:05.160 14:38:12 -- nvmf/common.sh@158 -- # true 00:24:05.160 14:38:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:05.160 14:38:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:05.160 14:38:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.160 14:38:12 -- nvmf/common.sh@161 -- # true 00:24:05.160 14:38:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.160 14:38:12 -- nvmf/common.sh@162 -- # true 00:24:05.160 14:38:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.160 14:38:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.160 14:38:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.160 14:38:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.420 14:38:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.420 14:38:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.420 14:38:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.420 14:38:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:05.420 14:38:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:05.420 14:38:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:05.420 14:38:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:05.420 14:38:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:05.420 14:38:12 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:05.420 14:38:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.420 14:38:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.420 14:38:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.420 14:38:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:05.420 14:38:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:05.420 14:38:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.420 14:38:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.420 14:38:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.420 14:38:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.420 14:38:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.420 14:38:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:05.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:24:05.420 00:24:05.420 --- 10.0.0.2 ping statistics --- 00:24:05.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.420 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:05.420 14:38:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:05.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:05.420 00:24:05.420 --- 10.0.0.3 ping statistics --- 00:24:05.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.420 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:05.420 14:38:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:24:05.420 00:24:05.420 --- 10.0.0.1 ping statistics --- 00:24:05.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.420 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:05.420 14:38:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.420 14:38:12 -- nvmf/common.sh@421 -- # return 0 00:24:05.420 14:38:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:05.420 14:38:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.420 14:38:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:05.420 14:38:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:05.420 14:38:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.420 14:38:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:05.420 14:38:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:05.420 14:38:12 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:05.420 14:38:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:05.420 14:38:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.420 14:38:12 -- common/autotest_common.sh@10 -- # set +x 00:24:05.420 14:38:12 -- nvmf/common.sh@469 -- # nvmfpid=83023 00:24:05.420 14:38:12 -- nvmf/common.sh@470 -- # waitforlisten 83023 00:24:05.420 14:38:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:05.420 14:38:12 -- common/autotest_common.sh@829 -- # '[' -z 83023 ']' 00:24:05.420 14:38:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.420 14:38:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.420 14:38:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.420 14:38:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.420 14:38:12 -- common/autotest_common.sh@10 -- # set +x 00:24:05.678 [2024-12-06 14:38:12.410767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:05.678 [2024-12-06 14:38:12.410905] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.678 [2024-12-06 14:38:12.548124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.938 [2024-12-06 14:38:12.664714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:05.938 [2024-12-06 14:38:12.664884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.938 [2024-12-06 14:38:12.664897] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.938 [2024-12-06 14:38:12.664907] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
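One detail worth noting from the async_init setup traced above: host/async_init.sh derives the namespace NGUID by stripping the hyphens from a freshly generated UUID (the uuidgen | tr -d - pair logged earlier), which is why the bdev JSON further down reports matching "nguid" and "uuid" values for nvme0n1. A minimal sketch of that derivation, using the value observed in this run:

  nguid=$(uuidgen | tr -d -)   # b1ea89cf86b34f62a37cf20e7a0d6b6a in this run
  echo "$nguid"                # later passed as nvmf_subsystem_add_ns ... -g "$nguid"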
00:24:05.938 [2024-12-06 14:38:12.664940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.516 14:38:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.516 14:38:13 -- common/autotest_common.sh@862 -- # return 0 00:24:06.516 14:38:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:06.516 14:38:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 14:38:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.516 14:38:13 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:06.516 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 [2024-12-06 14:38:13.393217] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.516 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.516 14:38:13 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:06.516 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 null0 00:24:06.516 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.516 14:38:13 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:06.516 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.516 14:38:13 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:06.516 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.516 14:38:13 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b1ea89cf86b34f62a37cf20e7a0d6b6a 00:24:06.516 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.516 14:38:13 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:06.516 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.516 [2024-12-06 14:38:13.433332] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.516 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.516 14:38:13 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:06.516 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.516 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.775 nvme0n1 00:24:06.775 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.775 14:38:13 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:06.775 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.775 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.775 [ 00:24:06.775 { 00:24:06.775 "aliases": [ 00:24:06.775 "b1ea89cf-86b3-4f62-a37c-f20e7a0d6b6a" 
00:24:06.775 ], 00:24:06.775 "assigned_rate_limits": { 00:24:06.775 "r_mbytes_per_sec": 0, 00:24:06.775 "rw_ios_per_sec": 0, 00:24:06.775 "rw_mbytes_per_sec": 0, 00:24:06.775 "w_mbytes_per_sec": 0 00:24:06.775 }, 00:24:06.775 "block_size": 512, 00:24:06.775 "claimed": false, 00:24:06.775 "driver_specific": { 00:24:06.775 "mp_policy": "active_passive", 00:24:06.775 "nvme": [ 00:24:06.775 { 00:24:06.775 "ctrlr_data": { 00:24:06.775 "ana_reporting": false, 00:24:06.775 "cntlid": 1, 00:24:06.775 "firmware_revision": "24.01.1", 00:24:06.775 "model_number": "SPDK bdev Controller", 00:24:06.775 "multi_ctrlr": true, 00:24:06.775 "oacs": { 00:24:06.775 "firmware": 0, 00:24:06.775 "format": 0, 00:24:06.775 "ns_manage": 0, 00:24:06.775 "security": 0 00:24:06.775 }, 00:24:06.775 "serial_number": "00000000000000000000", 00:24:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.775 "vendor_id": "0x8086" 00:24:06.775 }, 00:24:06.775 "ns_data": { 00:24:06.775 "can_share": true, 00:24:06.775 "id": 1 00:24:06.775 }, 00:24:06.775 "trid": { 00:24:06.775 "adrfam": "IPv4", 00:24:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.775 "traddr": "10.0.0.2", 00:24:06.775 "trsvcid": "4420", 00:24:06.775 "trtype": "TCP" 00:24:06.775 }, 00:24:06.775 "vs": { 00:24:06.775 "nvme_version": "1.3" 00:24:06.775 } 00:24:06.775 } 00:24:06.775 ] 00:24:06.775 }, 00:24:06.775 "name": "nvme0n1", 00:24:06.775 "num_blocks": 2097152, 00:24:06.775 "product_name": "NVMe disk", 00:24:06.775 "supported_io_types": { 00:24:06.775 "abort": true, 00:24:06.775 "compare": true, 00:24:06.775 "compare_and_write": true, 00:24:06.775 "flush": true, 00:24:06.775 "nvme_admin": true, 00:24:06.775 "nvme_io": true, 00:24:06.775 "read": true, 00:24:06.775 "reset": true, 00:24:06.775 "unmap": false, 00:24:06.775 "write": true, 00:24:06.775 "write_zeroes": true 00:24:06.775 }, 00:24:06.775 "uuid": "b1ea89cf-86b3-4f62-a37c-f20e7a0d6b6a", 00:24:06.775 "zoned": false 00:24:06.775 } 00:24:06.775 ] 00:24:06.775 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.775 14:38:13 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:06.775 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.775 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:06.775 [2024-12-06 14:38:13.711061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:06.775 [2024-12-06 14:38:13.711171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e1f90 (9): Bad file descriptor 00:24:07.034 [2024-12-06 14:38:13.843634] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
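The target- and initiator-side calls traced so far in this test are plain SPDK JSON-RPCs; a condensed sketch of the same sequence follows, assuming rpc_cmd is the test suite's usual wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock, with the addresses, NQNs and NGUID copied from this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_null_create null0 1024 512                # 1024 MiB, 512 B blocks (=> 2097152 blocks)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b1ea89cf86b34f62a37cf20e7a0d6b6a
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_get_bdevs -b nvme0n1                      # dumps the JSON shown above
  scripts/rpc.py bdev_nvme_reset_controller nvme0               # the disconnect/reconnect logged just above

The reset is what produces the nvme_ctrlr_disconnect and "Resetting controller successful" notices seen above, before the second bdev_get_bdevs dump that follows.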
00:24:07.034 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.034 14:38:13 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:07.034 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.034 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:07.034 [ 00:24:07.034 { 00:24:07.034 "aliases": [ 00:24:07.034 "b1ea89cf-86b3-4f62-a37c-f20e7a0d6b6a" 00:24:07.034 ], 00:24:07.034 "assigned_rate_limits": { 00:24:07.034 "r_mbytes_per_sec": 0, 00:24:07.034 "rw_ios_per_sec": 0, 00:24:07.034 "rw_mbytes_per_sec": 0, 00:24:07.034 "w_mbytes_per_sec": 0 00:24:07.034 }, 00:24:07.034 "block_size": 512, 00:24:07.034 "claimed": false, 00:24:07.034 "driver_specific": { 00:24:07.034 "mp_policy": "active_passive", 00:24:07.034 "nvme": [ 00:24:07.034 { 00:24:07.034 "ctrlr_data": { 00:24:07.034 "ana_reporting": false, 00:24:07.034 "cntlid": 2, 00:24:07.034 "firmware_revision": "24.01.1", 00:24:07.034 "model_number": "SPDK bdev Controller", 00:24:07.034 "multi_ctrlr": true, 00:24:07.034 "oacs": { 00:24:07.034 "firmware": 0, 00:24:07.034 "format": 0, 00:24:07.034 "ns_manage": 0, 00:24:07.034 "security": 0 00:24:07.034 }, 00:24:07.034 "serial_number": "00000000000000000000", 00:24:07.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.034 "vendor_id": "0x8086" 00:24:07.034 }, 00:24:07.034 "ns_data": { 00:24:07.034 "can_share": true, 00:24:07.034 "id": 1 00:24:07.034 }, 00:24:07.034 "trid": { 00:24:07.034 "adrfam": "IPv4", 00:24:07.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.034 "traddr": "10.0.0.2", 00:24:07.034 "trsvcid": "4420", 00:24:07.034 "trtype": "TCP" 00:24:07.034 }, 00:24:07.034 "vs": { 00:24:07.034 "nvme_version": "1.3" 00:24:07.034 } 00:24:07.034 } 00:24:07.034 ] 00:24:07.034 }, 00:24:07.034 "name": "nvme0n1", 00:24:07.034 "num_blocks": 2097152, 00:24:07.034 "product_name": "NVMe disk", 00:24:07.034 "supported_io_types": { 00:24:07.034 "abort": true, 00:24:07.034 "compare": true, 00:24:07.034 "compare_and_write": true, 00:24:07.034 "flush": true, 00:24:07.034 "nvme_admin": true, 00:24:07.034 "nvme_io": true, 00:24:07.034 "read": true, 00:24:07.034 "reset": true, 00:24:07.034 "unmap": false, 00:24:07.034 "write": true, 00:24:07.034 "write_zeroes": true 00:24:07.034 }, 00:24:07.034 "uuid": "b1ea89cf-86b3-4f62-a37c-f20e7a0d6b6a", 00:24:07.034 "zoned": false 00:24:07.034 } 00:24:07.034 ] 00:24:07.034 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.034 14:38:13 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.034 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.034 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:07.034 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.034 14:38:13 -- host/async_init.sh@53 -- # mktemp 00:24:07.034 14:38:13 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.x7E2mjfsuG 00:24:07.034 14:38:13 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:07.034 14:38:13 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.x7E2mjfsuG 00:24:07.034 14:38:13 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:07.035 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.035 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:07.035 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.035 14:38:13 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:07.035 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.035 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:07.035 [2024-12-06 14:38:13.911212] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.035 [2024-12-06 14:38:13.911394] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:07.035 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.035 14:38:13 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x7E2mjfsuG 00:24:07.035 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.035 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:07.035 14:38:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.035 14:38:13 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x7E2mjfsuG 00:24:07.035 14:38:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.035 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:24:07.035 [2024-12-06 14:38:13.927205] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.035 nvme0n1 00:24:07.035 14:38:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.035 14:38:14 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:07.035 14:38:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.035 14:38:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.293 [ 00:24:07.293 { 00:24:07.293 "aliases": [ 00:24:07.293 "b1ea89cf-86b3-4f62-a37c-f20e7a0d6b6a" 00:24:07.293 ], 00:24:07.293 "assigned_rate_limits": { 00:24:07.293 "r_mbytes_per_sec": 0, 00:24:07.293 "rw_ios_per_sec": 0, 00:24:07.294 "rw_mbytes_per_sec": 0, 00:24:07.294 "w_mbytes_per_sec": 0 00:24:07.294 }, 00:24:07.294 "block_size": 512, 00:24:07.294 "claimed": false, 00:24:07.294 "driver_specific": { 00:24:07.294 "mp_policy": "active_passive", 00:24:07.294 "nvme": [ 00:24:07.294 { 00:24:07.294 "ctrlr_data": { 00:24:07.294 "ana_reporting": false, 00:24:07.294 "cntlid": 3, 00:24:07.294 "firmware_revision": "24.01.1", 00:24:07.294 "model_number": "SPDK bdev Controller", 00:24:07.294 "multi_ctrlr": true, 00:24:07.294 "oacs": { 00:24:07.294 "firmware": 0, 00:24:07.294 "format": 0, 00:24:07.294 "ns_manage": 0, 00:24:07.294 "security": 0 00:24:07.294 }, 00:24:07.294 "serial_number": "00000000000000000000", 00:24:07.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.294 "vendor_id": "0x8086" 00:24:07.294 }, 00:24:07.294 "ns_data": { 00:24:07.294 "can_share": true, 00:24:07.294 "id": 1 00:24:07.294 }, 00:24:07.294 "trid": { 00:24:07.294 "adrfam": "IPv4", 00:24:07.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.294 "traddr": "10.0.0.2", 00:24:07.294 "trsvcid": "4421", 00:24:07.294 "trtype": "TCP" 00:24:07.294 }, 00:24:07.294 "vs": { 00:24:07.294 "nvme_version": "1.3" 00:24:07.294 } 00:24:07.294 } 00:24:07.294 ] 00:24:07.294 }, 00:24:07.294 "name": "nvme0n1", 00:24:07.294 "num_blocks": 2097152, 00:24:07.294 "product_name": "NVMe disk", 00:24:07.294 "supported_io_types": { 00:24:07.294 "abort": true, 00:24:07.294 "compare": true, 00:24:07.294 "compare_and_write": true, 00:24:07.294 "flush": true, 00:24:07.294 "nvme_admin": true, 00:24:07.294 "nvme_io": true, 00:24:07.294 
"read": true, 00:24:07.294 "reset": true, 00:24:07.294 "unmap": false, 00:24:07.294 "write": true, 00:24:07.294 "write_zeroes": true 00:24:07.294 }, 00:24:07.294 "uuid": "b1ea89cf-86b3-4f62-a37c-f20e7a0d6b6a", 00:24:07.294 "zoned": false 00:24:07.294 } 00:24:07.294 ] 00:24:07.294 14:38:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.294 14:38:14 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.294 14:38:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.294 14:38:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.294 14:38:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.294 14:38:14 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.x7E2mjfsuG 00:24:07.294 14:38:14 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:07.294 14:38:14 -- host/async_init.sh@78 -- # nvmftestfini 00:24:07.294 14:38:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:07.294 14:38:14 -- nvmf/common.sh@116 -- # sync 00:24:07.294 14:38:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:07.294 14:38:14 -- nvmf/common.sh@119 -- # set +e 00:24:07.294 14:38:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:07.294 14:38:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:07.294 rmmod nvme_tcp 00:24:07.294 rmmod nvme_fabrics 00:24:07.294 rmmod nvme_keyring 00:24:07.294 14:38:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:07.294 14:38:14 -- nvmf/common.sh@123 -- # set -e 00:24:07.294 14:38:14 -- nvmf/common.sh@124 -- # return 0 00:24:07.294 14:38:14 -- nvmf/common.sh@477 -- # '[' -n 83023 ']' 00:24:07.294 14:38:14 -- nvmf/common.sh@478 -- # killprocess 83023 00:24:07.294 14:38:14 -- common/autotest_common.sh@936 -- # '[' -z 83023 ']' 00:24:07.294 14:38:14 -- common/autotest_common.sh@940 -- # kill -0 83023 00:24:07.294 14:38:14 -- common/autotest_common.sh@941 -- # uname 00:24:07.294 14:38:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:07.294 14:38:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83023 00:24:07.294 14:38:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:07.294 14:38:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:07.294 killing process with pid 83023 00:24:07.294 14:38:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83023' 00:24:07.294 14:38:14 -- common/autotest_common.sh@955 -- # kill 83023 00:24:07.294 14:38:14 -- common/autotest_common.sh@960 -- # wait 83023 00:24:07.552 14:38:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:07.552 14:38:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:07.552 14:38:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:07.552 14:38:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.552 14:38:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:07.552 14:38:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.552 14:38:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.552 14:38:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.552 14:38:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:07.552 ************************************ 00:24:07.552 END TEST nvmf_async_init 00:24:07.552 ************************************ 00:24:07.552 00:24:07.552 real 0m2.705s 00:24:07.552 user 0m2.450s 00:24:07.552 sys 0m0.619s 00:24:07.552 14:38:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:07.552 14:38:14 -- 
common/autotest_common.sh@10 -- # set +x 00:24:07.552 14:38:14 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:07.552 14:38:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:07.552 14:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:07.552 14:38:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.552 ************************************ 00:24:07.552 START TEST dma 00:24:07.552 ************************************ 00:24:07.552 14:38:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:07.812 * Looking for test storage... 00:24:07.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:07.812 14:38:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:07.812 14:38:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:07.812 14:38:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:07.812 14:38:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:07.812 14:38:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:07.812 14:38:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:07.812 14:38:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:07.812 14:38:14 -- scripts/common.sh@335 -- # IFS=.-: 00:24:07.812 14:38:14 -- scripts/common.sh@335 -- # read -ra ver1 00:24:07.812 14:38:14 -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.812 14:38:14 -- scripts/common.sh@336 -- # read -ra ver2 00:24:07.812 14:38:14 -- scripts/common.sh@337 -- # local 'op=<' 00:24:07.812 14:38:14 -- scripts/common.sh@339 -- # ver1_l=2 00:24:07.812 14:38:14 -- scripts/common.sh@340 -- # ver2_l=1 00:24:07.812 14:38:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:07.812 14:38:14 -- scripts/common.sh@343 -- # case "$op" in 00:24:07.812 14:38:14 -- scripts/common.sh@344 -- # : 1 00:24:07.812 14:38:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:07.812 14:38:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.812 14:38:14 -- scripts/common.sh@364 -- # decimal 1 00:24:07.812 14:38:14 -- scripts/common.sh@352 -- # local d=1 00:24:07.812 14:38:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.812 14:38:14 -- scripts/common.sh@354 -- # echo 1 00:24:07.812 14:38:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:07.812 14:38:14 -- scripts/common.sh@365 -- # decimal 2 00:24:07.812 14:38:14 -- scripts/common.sh@352 -- # local d=2 00:24:07.812 14:38:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.812 14:38:14 -- scripts/common.sh@354 -- # echo 2 00:24:07.812 14:38:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:07.812 14:38:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:07.812 14:38:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:07.812 14:38:14 -- scripts/common.sh@367 -- # return 0 00:24:07.812 14:38:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.812 14:38:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.812 --rc genhtml_branch_coverage=1 00:24:07.812 --rc genhtml_function_coverage=1 00:24:07.812 --rc genhtml_legend=1 00:24:07.812 --rc geninfo_all_blocks=1 00:24:07.812 --rc geninfo_unexecuted_blocks=1 00:24:07.812 00:24:07.812 ' 00:24:07.812 14:38:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.812 --rc genhtml_branch_coverage=1 00:24:07.812 --rc genhtml_function_coverage=1 00:24:07.812 --rc genhtml_legend=1 00:24:07.812 --rc geninfo_all_blocks=1 00:24:07.812 --rc geninfo_unexecuted_blocks=1 00:24:07.812 00:24:07.812 ' 00:24:07.812 14:38:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.812 --rc genhtml_branch_coverage=1 00:24:07.812 --rc genhtml_function_coverage=1 00:24:07.812 --rc genhtml_legend=1 00:24:07.812 --rc geninfo_all_blocks=1 00:24:07.812 --rc geninfo_unexecuted_blocks=1 00:24:07.812 00:24:07.812 ' 00:24:07.812 14:38:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.812 --rc genhtml_branch_coverage=1 00:24:07.812 --rc genhtml_function_coverage=1 00:24:07.812 --rc genhtml_legend=1 00:24:07.812 --rc geninfo_all_blocks=1 00:24:07.812 --rc geninfo_unexecuted_blocks=1 00:24:07.812 00:24:07.812 ' 00:24:07.812 14:38:14 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:07.812 14:38:14 -- nvmf/common.sh@7 -- # uname -s 00:24:07.812 14:38:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.812 14:38:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.812 14:38:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.812 14:38:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.812 14:38:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.812 14:38:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.812 14:38:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.812 14:38:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.812 14:38:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.812 14:38:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.812 14:38:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:07.812 
14:38:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:07.812 14:38:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.812 14:38:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.812 14:38:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:07.812 14:38:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:07.812 14:38:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.812 14:38:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.812 14:38:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.812 14:38:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.812 14:38:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.812 14:38:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.812 14:38:14 -- paths/export.sh@5 -- # export PATH 00:24:07.812 14:38:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.812 14:38:14 -- nvmf/common.sh@46 -- # : 0 00:24:07.812 14:38:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:07.812 14:38:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:07.812 14:38:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:07.812 14:38:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.812 14:38:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.812 14:38:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:07.812 14:38:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:07.812 14:38:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:07.812 14:38:14 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:07.812 14:38:14 -- host/dma.sh@13 -- # exit 0 00:24:07.812 00:24:07.812 real 0m0.219s 00:24:07.812 user 0m0.129s 00:24:07.812 sys 0m0.096s 00:24:07.812 ************************************ 00:24:07.812 END TEST dma 00:24:07.812 ************************************ 00:24:07.812 14:38:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:07.812 14:38:14 -- common/autotest_common.sh@10 -- # set +x 00:24:08.072 14:38:14 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.072 14:38:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:08.072 14:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:08.072 14:38:14 -- common/autotest_common.sh@10 -- # set +x 00:24:08.072 ************************************ 00:24:08.072 START TEST nvmf_identify 00:24:08.072 ************************************ 00:24:08.072 14:38:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.072 * Looking for test storage... 00:24:08.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:08.072 14:38:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:08.072 14:38:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:08.072 14:38:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:08.072 14:38:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:08.072 14:38:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:08.072 14:38:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:08.072 14:38:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:08.072 14:38:14 -- scripts/common.sh@335 -- # IFS=.-: 00:24:08.072 14:38:14 -- scripts/common.sh@335 -- # read -ra ver1 00:24:08.072 14:38:14 -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.072 14:38:14 -- scripts/common.sh@336 -- # read -ra ver2 00:24:08.072 14:38:14 -- scripts/common.sh@337 -- # local 'op=<' 00:24:08.072 14:38:14 -- scripts/common.sh@339 -- # ver1_l=2 00:24:08.072 14:38:14 -- scripts/common.sh@340 -- # ver2_l=1 00:24:08.072 14:38:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:08.072 14:38:14 -- scripts/common.sh@343 -- # case "$op" in 00:24:08.072 14:38:14 -- scripts/common.sh@344 -- # : 1 00:24:08.072 14:38:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:08.072 14:38:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.072 14:38:14 -- scripts/common.sh@364 -- # decimal 1 00:24:08.072 14:38:14 -- scripts/common.sh@352 -- # local d=1 00:24:08.072 14:38:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.072 14:38:14 -- scripts/common.sh@354 -- # echo 1 00:24:08.072 14:38:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:08.072 14:38:14 -- scripts/common.sh@365 -- # decimal 2 00:24:08.072 14:38:14 -- scripts/common.sh@352 -- # local d=2 00:24:08.072 14:38:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.072 14:38:14 -- scripts/common.sh@354 -- # echo 2 00:24:08.072 14:38:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:08.072 14:38:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:08.072 14:38:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:08.072 14:38:14 -- scripts/common.sh@367 -- # return 0 00:24:08.072 14:38:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.072 14:38:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:08.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.072 --rc genhtml_branch_coverage=1 00:24:08.072 --rc genhtml_function_coverage=1 00:24:08.072 --rc genhtml_legend=1 00:24:08.072 --rc geninfo_all_blocks=1 00:24:08.072 --rc geninfo_unexecuted_blocks=1 00:24:08.072 00:24:08.072 ' 00:24:08.072 14:38:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:08.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.072 --rc genhtml_branch_coverage=1 00:24:08.072 --rc genhtml_function_coverage=1 00:24:08.072 --rc genhtml_legend=1 00:24:08.072 --rc geninfo_all_blocks=1 00:24:08.072 --rc geninfo_unexecuted_blocks=1 00:24:08.072 00:24:08.072 ' 00:24:08.072 14:38:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:08.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.072 --rc genhtml_branch_coverage=1 00:24:08.072 --rc genhtml_function_coverage=1 00:24:08.072 --rc genhtml_legend=1 00:24:08.072 --rc geninfo_all_blocks=1 00:24:08.072 --rc geninfo_unexecuted_blocks=1 00:24:08.072 00:24:08.072 ' 00:24:08.072 14:38:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:08.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.072 --rc genhtml_branch_coverage=1 00:24:08.072 --rc genhtml_function_coverage=1 00:24:08.072 --rc genhtml_legend=1 00:24:08.072 --rc geninfo_all_blocks=1 00:24:08.072 --rc geninfo_unexecuted_blocks=1 00:24:08.072 00:24:08.072 ' 00:24:08.072 14:38:14 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.072 14:38:14 -- nvmf/common.sh@7 -- # uname -s 00:24:08.072 14:38:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.072 14:38:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.072 14:38:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.072 14:38:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.072 14:38:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.072 14:38:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.072 14:38:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.072 14:38:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.072 14:38:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.072 14:38:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.072 14:38:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:08.072 
14:38:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:08.072 14:38:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.072 14:38:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.072 14:38:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:08.072 14:38:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.072 14:38:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.072 14:38:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.072 14:38:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.072 14:38:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.073 14:38:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.073 14:38:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.073 14:38:14 -- paths/export.sh@5 -- # export PATH 00:24:08.073 14:38:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.073 14:38:14 -- nvmf/common.sh@46 -- # : 0 00:24:08.073 14:38:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.073 14:38:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.073 14:38:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.073 14:38:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.073 14:38:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.073 14:38:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:08.073 14:38:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.073 14:38:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.073 14:38:14 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.073 14:38:14 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.073 14:38:14 -- host/identify.sh@14 -- # nvmftestinit 00:24:08.073 14:38:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:08.073 14:38:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.073 14:38:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:08.073 14:38:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:08.073 14:38:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:08.073 14:38:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.073 14:38:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.073 14:38:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.073 14:38:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:08.073 14:38:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:08.073 14:38:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:08.073 14:38:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:08.073 14:38:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:08.073 14:38:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:08.073 14:38:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.073 14:38:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.073 14:38:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:08.073 14:38:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:08.073 14:38:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:08.073 14:38:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:08.073 14:38:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:08.073 14:38:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.073 14:38:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:08.073 14:38:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:08.073 14:38:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:08.073 14:38:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:08.073 14:38:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:08.073 14:38:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:08.073 Cannot find device "nvmf_tgt_br" 00:24:08.073 14:38:15 -- nvmf/common.sh@154 -- # true 00:24:08.073 14:38:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:08.342 Cannot find device "nvmf_tgt_br2" 00:24:08.342 14:38:15 -- nvmf/common.sh@155 -- # true 00:24:08.342 14:38:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:08.342 14:38:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:08.342 Cannot find device "nvmf_tgt_br" 00:24:08.342 14:38:15 -- nvmf/common.sh@157 -- # true 00:24:08.342 14:38:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:08.342 Cannot find device "nvmf_tgt_br2" 00:24:08.342 14:38:15 -- nvmf/common.sh@158 -- # true 00:24:08.342 14:38:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:08.342 14:38:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:08.342 14:38:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:08.342 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:24:08.342 14:38:15 -- nvmf/common.sh@161 -- # true 00:24:08.342 14:38:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:08.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.342 14:38:15 -- nvmf/common.sh@162 -- # true 00:24:08.342 14:38:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:08.342 14:38:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:08.342 14:38:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:08.342 14:38:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:08.342 14:38:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:08.342 14:38:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:08.342 14:38:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:08.342 14:38:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:08.342 14:38:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:08.342 14:38:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:08.342 14:38:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:08.342 14:38:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:08.342 14:38:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:08.342 14:38:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:08.342 14:38:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:08.342 14:38:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:08.342 14:38:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:08.342 14:38:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:08.342 14:38:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:08.342 14:38:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:08.342 14:38:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:08.600 14:38:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:08.600 14:38:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:08.600 14:38:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:08.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:24:08.600 00:24:08.600 --- 10.0.0.2 ping statistics --- 00:24:08.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.600 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:08.600 14:38:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:08.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:08.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:24:08.600 00:24:08.600 --- 10.0.0.3 ping statistics --- 00:24:08.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.600 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:08.600 14:38:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:08.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:08.600 00:24:08.600 --- 10.0.0.1 ping statistics --- 00:24:08.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.600 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:08.600 14:38:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.600 14:38:15 -- nvmf/common.sh@421 -- # return 0 00:24:08.600 14:38:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:08.600 14:38:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.600 14:38:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:08.600 14:38:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:08.600 14:38:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.600 14:38:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:08.600 14:38:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:08.600 14:38:15 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:08.600 14:38:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.600 14:38:15 -- common/autotest_common.sh@10 -- # set +x 00:24:08.600 14:38:15 -- host/identify.sh@19 -- # nvmfpid=83311 00:24:08.600 14:38:15 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.600 14:38:15 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.600 14:38:15 -- host/identify.sh@23 -- # waitforlisten 83311 00:24:08.600 14:38:15 -- common/autotest_common.sh@829 -- # '[' -z 83311 ']' 00:24:08.600 14:38:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.600 14:38:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.600 14:38:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.600 14:38:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.600 14:38:15 -- common/autotest_common.sh@10 -- # set +x 00:24:08.600 [2024-12-06 14:38:15.425276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:08.600 [2024-12-06 14:38:15.425370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.857 [2024-12-06 14:38:15.566033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.857 [2024-12-06 14:38:15.705780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:08.857 [2024-12-06 14:38:15.706244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.857 [2024-12-06 14:38:15.706301] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.857 [2024-12-06 14:38:15.706538] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
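The three pings above confirm the veth topology that nvmf_veth_init builds before nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch (assumes root plus iproute2 and iptables; interface names, addresses and the nvmf_tgt invocation are taken from the trace, with the second target interface and repo paths abbreviated):

#!/usr/bin/env bash
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The target then runs inside the namespace, as in host/identify.sh@18 above:
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF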
00:24:08.857 [2024-12-06 14:38:15.706817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.858 [2024-12-06 14:38:15.707936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.858 [2024-12-06 14:38:15.708055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:08.858 [2024-12-06 14:38:15.708063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.792 14:38:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.792 14:38:16 -- common/autotest_common.sh@862 -- # return 0 00:24:09.792 14:38:16 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.792 14:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.792 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.792 [2024-12-06 14:38:16.478339] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.792 14:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.792 14:38:16 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:09.792 14:38:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.792 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.792 14:38:16 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.792 14:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.792 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.792 Malloc0 00:24:09.792 14:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.792 14:38:16 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.792 14:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.792 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.792 14:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.793 14:38:16 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:09.793 14:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.793 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.793 14:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.793 14:38:16 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.793 14:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.793 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.793 [2024-12-06 14:38:16.591524] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.793 14:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.793 14:38:16 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:09.793 14:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.793 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.793 14:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.793 14:38:16 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:09.793 14:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.793 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.793 [2024-12-06 14:38:16.607221] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:09.793 [ 
00:24:09.793 { 00:24:09.793 "allow_any_host": true, 00:24:09.793 "hosts": [], 00:24:09.793 "listen_addresses": [ 00:24:09.793 { 00:24:09.793 "adrfam": "IPv4", 00:24:09.793 "traddr": "10.0.0.2", 00:24:09.793 "transport": "TCP", 00:24:09.793 "trsvcid": "4420", 00:24:09.793 "trtype": "TCP" 00:24:09.793 } 00:24:09.793 ], 00:24:09.793 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:09.793 "subtype": "Discovery" 00:24:09.793 }, 00:24:09.793 { 00:24:09.793 "allow_any_host": true, 00:24:09.793 "hosts": [], 00:24:09.793 "listen_addresses": [ 00:24:09.793 { 00:24:09.793 "adrfam": "IPv4", 00:24:09.793 "traddr": "10.0.0.2", 00:24:09.793 "transport": "TCP", 00:24:09.793 "trsvcid": "4420", 00:24:09.793 "trtype": "TCP" 00:24:09.793 } 00:24:09.793 ], 00:24:09.793 "max_cntlid": 65519, 00:24:09.793 "max_namespaces": 32, 00:24:09.793 "min_cntlid": 1, 00:24:09.793 "model_number": "SPDK bdev Controller", 00:24:09.793 "namespaces": [ 00:24:09.793 { 00:24:09.793 "bdev_name": "Malloc0", 00:24:09.793 "eui64": "ABCDEF0123456789", 00:24:09.793 "name": "Malloc0", 00:24:09.793 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:09.793 "nsid": 1, 00:24:09.793 "uuid": "06eb157b-eba8-40dd-9006-5963d5736f20" 00:24:09.793 } 00:24:09.793 ], 00:24:09.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.793 "serial_number": "SPDK00000000000001", 00:24:09.793 "subtype": "NVMe" 00:24:09.793 } 00:24:09.793 ] 00:24:09.793 14:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.793 14:38:16 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:09.793 [2024-12-06 14:38:16.646662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
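At this point the target side has been populated as the nvmf_get_subsystems dump above reports: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 backed by Malloc0, both listening on 10.0.0.2:4420. A compact sketch of that setup and of the discovery-page identify call the test issues next (same assumptions as the earlier sketch: running nvmf_tgt, scripts/rpc.py, values copied from this run):

RPC=scripts/rpc.py
$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Query the discovery subsystem with full transport-level logging, as in host/identify.sh@39:
./build/bin/spdk_nvme_identify -L all \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'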
00:24:09.793 [2024-12-06 14:38:16.646864] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83364 ] 00:24:10.055 [2024-12-06 14:38:16.779754] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:10.055 [2024-12-06 14:38:16.779831] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:10.055 [2024-12-06 14:38:16.779838] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:10.055 [2024-12-06 14:38:16.779848] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:10.055 [2024-12-06 14:38:16.779856] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:10.055 [2024-12-06 14:38:16.779975] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:10.055 [2024-12-06 14:38:16.780043] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11c7d30 0 00:24:10.055 [2024-12-06 14:38:16.794488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:10.055 [2024-12-06 14:38:16.794513] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:10.055 [2024-12-06 14:38:16.794535] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:10.056 [2024-12-06 14:38:16.794539] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:10.056 [2024-12-06 14:38:16.794586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.794593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.794597] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.794611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:10.056 [2024-12-06 14:38:16.794641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.802485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.802507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.056 [2024-12-06 14:38:16.802512] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.056 [2024-12-06 14:38:16.802544] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:10.056 [2024-12-06 14:38:16.802552] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:10.056 [2024-12-06 14:38:16.802557] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:10.056 [2024-12-06 14:38:16.802573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 
14:38:16.802582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.802590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.056 [2024-12-06 14:38:16.802618] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.802689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.802696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.056 [2024-12-06 14:38:16.802700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.056 [2024-12-06 14:38:16.802710] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:10.056 [2024-12-06 14:38:16.802717] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:10.056 [2024-12-06 14:38:16.802725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.802739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.056 [2024-12-06 14:38:16.802757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.802849] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.802857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.056 [2024-12-06 14:38:16.802860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.056 [2024-12-06 14:38:16.802871] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:10.056 [2024-12-06 14:38:16.802880] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:10.056 [2024-12-06 14:38:16.802888] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.802903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.056 [2024-12-06 14:38:16.802921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.802973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.802980] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.056 [2024-12-06 14:38:16.802984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.802988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.056 [2024-12-06 14:38:16.802994] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:10.056 [2024-12-06 14:38:16.803004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.803020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.056 [2024-12-06 14:38:16.803038] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.803089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.803095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.056 [2024-12-06 14:38:16.803099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.056 [2024-12-06 14:38:16.803109] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:10.056 [2024-12-06 14:38:16.803114] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:10.056 [2024-12-06 14:38:16.803122] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:10.056 [2024-12-06 14:38:16.803227] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:10.056 [2024-12-06 14:38:16.803239] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:10.056 [2024-12-06 14:38:16.803249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803257] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.803265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.056 [2024-12-06 14:38:16.803284] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.803344] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.803355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.056 [2024-12-06 14:38:16.803359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:10.056 [2024-12-06 14:38:16.803363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.056 [2024-12-06 14:38:16.803370] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:10.056 [2024-12-06 14:38:16.803380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803385] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803389] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.803396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.056 [2024-12-06 14:38:16.803443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.803510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.803517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.056 [2024-12-06 14:38:16.803521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.056 [2024-12-06 14:38:16.803531] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:10.056 [2024-12-06 14:38:16.803536] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:10.056 [2024-12-06 14:38:16.803545] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:10.056 [2024-12-06 14:38:16.803562] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:10.056 [2024-12-06 14:38:16.803573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.056 [2024-12-06 14:38:16.803590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.056 [2024-12-06 14:38:16.803611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.056 [2024-12-06 14:38:16.803710] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.056 [2024-12-06 14:38:16.803717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.056 [2024-12-06 14:38:16.803721] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803725] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c7d30): datao=0, datal=4096, cccid=0 00:24:10.056 [2024-12-06 14:38:16.803730] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1225f30) on tqpair(0x11c7d30): expected_datao=0, 
payload_size=4096 00:24:10.056 [2024-12-06 14:38:16.803740] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803745] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.056 [2024-12-06 14:38:16.803753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.056 [2024-12-06 14:38:16.803760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.057 [2024-12-06 14:38:16.803763] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.803768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.057 [2024-12-06 14:38:16.803777] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:10.057 [2024-12-06 14:38:16.803783] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:10.057 [2024-12-06 14:38:16.803788] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:10.057 [2024-12-06 14:38:16.803808] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:10.057 [2024-12-06 14:38:16.803813] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:10.057 [2024-12-06 14:38:16.803818] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:10.057 [2024-12-06 14:38:16.803831] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:10.057 [2024-12-06 14:38:16.803840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.803844] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.803848] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.803856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.057 [2024-12-06 14:38:16.803876] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.057 [2024-12-06 14:38:16.803942] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.057 [2024-12-06 14:38:16.803949] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.057 [2024-12-06 14:38:16.803953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.803963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1225f30) on tqpair=0x11c7d30 00:24:10.057 [2024-12-06 14:38:16.803972] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.803976] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.803980] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.803987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.057 [2024-12-06 
14:38:16.803993] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.803997] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804001] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.804006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.057 [2024-12-06 14:38:16.804013] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804016] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804020] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.804026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.057 [2024-12-06 14:38:16.804032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804039] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.804045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.057 [2024-12-06 14:38:16.804050] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:10.057 [2024-12-06 14:38:16.804063] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:10.057 [2024-12-06 14:38:16.804071] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804075] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804078] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.804085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.057 [2024-12-06 14:38:16.804105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225f30, cid 0, qid 0 00:24:10.057 [2024-12-06 14:38:16.804112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226090, cid 1, qid 0 00:24:10.057 [2024-12-06 14:38:16.804117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12261f0, cid 2, qid 0 00:24:10.057 [2024-12-06 14:38:16.804122] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.057 [2024-12-06 14:38:16.804127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12264b0, cid 4, qid 0 00:24:10.057 [2024-12-06 14:38:16.804224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.057 [2024-12-06 14:38:16.804231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.057 [2024-12-06 14:38:16.804234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x12264b0) on tqpair=0x11c7d30 00:24:10.057 [2024-12-06 14:38:16.804245] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:10.057 [2024-12-06 14:38:16.804250] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:10.057 [2024-12-06 14:38:16.804261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.804277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.057 [2024-12-06 14:38:16.804295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12264b0, cid 4, qid 0 00:24:10.057 [2024-12-06 14:38:16.804358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.057 [2024-12-06 14:38:16.804365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.057 [2024-12-06 14:38:16.804369] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804373] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c7d30): datao=0, datal=4096, cccid=4 00:24:10.057 [2024-12-06 14:38:16.804377] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12264b0) on tqpair(0x11c7d30): expected_datao=0, payload_size=4096 00:24:10.057 [2024-12-06 14:38:16.804385] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804389] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.057 [2024-12-06 14:38:16.804403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.057 [2024-12-06 14:38:16.804407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804411] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12264b0) on tqpair=0x11c7d30 00:24:10.057 [2024-12-06 14:38:16.804425] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:10.057 [2024-12-06 14:38:16.804462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.804480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.057 [2024-12-06 14:38:16.804488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804492] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804495] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.804501] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.057 [2024-12-06 14:38:16.804528] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12264b0, cid 4, qid 0 00:24:10.057 [2024-12-06 14:38:16.804535] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226610, cid 5, qid 0 00:24:10.057 [2024-12-06 14:38:16.804636] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.057 [2024-12-06 14:38:16.804644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.057 [2024-12-06 14:38:16.804647] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804651] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c7d30): datao=0, datal=1024, cccid=4 00:24:10.057 [2024-12-06 14:38:16.804656] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12264b0) on tqpair(0x11c7d30): expected_datao=0, payload_size=1024 00:24:10.057 [2024-12-06 14:38:16.804663] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804667] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.057 [2024-12-06 14:38:16.804679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.057 [2024-12-06 14:38:16.804683] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.804686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226610) on tqpair=0x11c7d30 00:24:10.057 [2024-12-06 14:38:16.849434] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.057 [2024-12-06 14:38:16.849456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.057 [2024-12-06 14:38:16.849477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.849481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12264b0) on tqpair=0x11c7d30 00:24:10.057 [2024-12-06 14:38:16.849502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.849508] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.057 [2024-12-06 14:38:16.849511] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c7d30) 00:24:10.057 [2024-12-06 14:38:16.849520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.057 [2024-12-06 14:38:16.849551] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12264b0, cid 4, qid 0 00:24:10.057 [2024-12-06 14:38:16.849629] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.057 [2024-12-06 14:38:16.849635] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.058 [2024-12-06 14:38:16.849639] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849652] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c7d30): datao=0, datal=3072, cccid=4 00:24:10.058 [2024-12-06 14:38:16.849673] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12264b0) on tqpair(0x11c7d30): expected_datao=0, payload_size=3072 00:24:10.058 [2024-12-06 
14:38:16.849715] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849721] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.058 [2024-12-06 14:38:16.849737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.058 [2024-12-06 14:38:16.849741] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12264b0) on tqpair=0x11c7d30 00:24:10.058 [2024-12-06 14:38:16.849757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c7d30) 00:24:10.058 [2024-12-06 14:38:16.849774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.058 [2024-12-06 14:38:16.849803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12264b0, cid 4, qid 0 00:24:10.058 [2024-12-06 14:38:16.849881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.058 [2024-12-06 14:38:16.849888] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.058 [2024-12-06 14:38:16.849892] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849896] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c7d30): datao=0, datal=8, cccid=4 00:24:10.058 [2024-12-06 14:38:16.849901] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12264b0) on tqpair(0x11c7d30): expected_datao=0, payload_size=8 00:24:10.058 [2024-12-06 14:38:16.849908] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.849923] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.058 ===================================================== 00:24:10.058 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:10.058 ===================================================== 00:24:10.058 Controller Capabilities/Features 00:24:10.058 ================================ 00:24:10.058 Vendor ID: 0000 00:24:10.058 Subsystem Vendor ID: 0000 00:24:10.058 Serial Number: .................... 00:24:10.058 Model Number: ........................................ 
00:24:10.058 Firmware Version: 24.01.1 00:24:10.058 Recommended Arb Burst: 0 00:24:10.058 IEEE OUI Identifier: 00 00 00 00:24:10.058 Multi-path I/O 00:24:10.058 May have multiple subsystem ports: No 00:24:10.058 May have multiple controllers: No 00:24:10.058 Associated with SR-IOV VF: No 00:24:10.058 Max Data Transfer Size: 131072 00:24:10.058 Max Number of Namespaces: 0 00:24:10.058 Max Number of I/O Queues: 1024 00:24:10.058 NVMe Specification Version (VS): 1.3 00:24:10.058 NVMe Specification Version (Identify): 1.3 00:24:10.058 Maximum Queue Entries: 128 00:24:10.058 Contiguous Queues Required: Yes 00:24:10.058 Arbitration Mechanisms Supported 00:24:10.058 Weighted Round Robin: Not Supported 00:24:10.058 Vendor Specific: Not Supported 00:24:10.058 Reset Timeout: 15000 ms 00:24:10.058 Doorbell Stride: 4 bytes 00:24:10.058 NVM Subsystem Reset: Not Supported 00:24:10.058 Command Sets Supported 00:24:10.058 NVM Command Set: Supported 00:24:10.058 Boot Partition: Not Supported 00:24:10.058 Memory Page Size Minimum: 4096 bytes 00:24:10.058 Memory Page Size Maximum: 4096 bytes 00:24:10.058 Persistent Memory Region: Not Supported 00:24:10.058 Optional Asynchronous Events Supported 00:24:10.058 Namespace Attribute Notices: Not Supported 00:24:10.058 Firmware Activation Notices: Not Supported 00:24:10.058 ANA Change Notices: Not Supported 00:24:10.058 PLE Aggregate Log Change Notices: Not Supported 00:24:10.058 LBA Status Info Alert Notices: Not Supported 00:24:10.058 EGE Aggregate Log Change Notices: Not Supported 00:24:10.058 Normal NVM Subsystem Shutdown event: Not Supported 00:24:10.058 Zone Descriptor Change Notices: Not Supported 00:24:10.058 Discovery Log Change Notices: Supported 00:24:10.058 Controller Attributes 00:24:10.058 128-bit Host Identifier: Not Supported 00:24:10.058 Non-Operational Permissive Mode: Not Supported 00:24:10.058 NVM Sets: Not Supported 00:24:10.058 Read Recovery Levels: Not Supported 00:24:10.058 Endurance Groups: Not Supported 00:24:10.058 Predictable Latency Mode: Not Supported 00:24:10.058 Traffic Based Keep ALive: Not Supported 00:24:10.058 Namespace Granularity: Not Supported 00:24:10.058 SQ Associations: Not Supported 00:24:10.058 UUID List: Not Supported 00:24:10.058 Multi-Domain Subsystem: Not Supported 00:24:10.058 Fixed Capacity Management: Not Supported 00:24:10.058 Variable Capacity Management: Not Supported 00:24:10.058 Delete Endurance Group: Not Supported 00:24:10.058 Delete NVM Set: Not Supported 00:24:10.058 Extended LBA Formats Supported: Not Supported 00:24:10.058 Flexible Data Placement Supported: Not Supported 00:24:10.058 00:24:10.058 Controller Memory Buffer Support 00:24:10.058 ================================ 00:24:10.058 Supported: No 00:24:10.058 00:24:10.058 Persistent Memory Region Support 00:24:10.058 ================================ 00:24:10.058 Supported: No 00:24:10.058 00:24:10.058 Admin Command Set Attributes 00:24:10.058 ============================ 00:24:10.058 Security Send/Receive: Not Supported 00:24:10.058 Format NVM: Not Supported 00:24:10.058 Firmware Activate/Download: Not Supported 00:24:10.058 Namespace Management: Not Supported 00:24:10.058 Device Self-Test: Not Supported 00:24:10.058 Directives: Not Supported 00:24:10.058 NVMe-MI: Not Supported 00:24:10.058 Virtualization Management: Not Supported 00:24:10.058 Doorbell Buffer Config: Not Supported 00:24:10.058 Get LBA Status Capability: Not Supported 00:24:10.058 Command & Feature Lockdown Capability: Not Supported 00:24:10.058 Abort Command Limit: 1 00:24:10.058 
Async Event Request Limit: 4 00:24:10.058 Number of Firmware Slots: N/A 00:24:10.058 Firmware Slot 1 Read-Only: N/A 00:24:10.058 [2024-12-06 14:38:16.892435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.058 [2024-12-06 14:38:16.892460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.058 [2024-12-06 14:38:16.892465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.058 [2024-12-06 14:38:16.892485] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12264b0) on tqpair=0x11c7d30 00:24:10.058 Firmware Activation Without Reset: N/A 00:24:10.058 Multiple Update Detection Support: N/A 00:24:10.058 Firmware Update Granularity: No Information Provided 00:24:10.058 Per-Namespace SMART Log: No 00:24:10.058 Asymmetric Namespace Access Log Page: Not Supported 00:24:10.058 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:10.058 Command Effects Log Page: Not Supported 00:24:10.058 Get Log Page Extended Data: Supported 00:24:10.058 Telemetry Log Pages: Not Supported 00:24:10.058 Persistent Event Log Pages: Not Supported 00:24:10.058 Supported Log Pages Log Page: May Support 00:24:10.058 Commands Supported & Effects Log Page: Not Supported 00:24:10.058 Feature Identifiers & Effects Log Page:May Support 00:24:10.058 NVMe-MI Commands & Effects Log Page: May Support 00:24:10.058 Data Area 4 for Telemetry Log: Not Supported 00:24:10.058 Error Log Page Entries Supported: 128 00:24:10.058 Keep Alive: Not Supported 00:24:10.058 00:24:10.058 NVM Command Set Attributes 00:24:10.058 ========================== 00:24:10.058 Submission Queue Entry Size 00:24:10.058 Max: 1 00:24:10.058 Min: 1 00:24:10.058 Completion Queue Entry Size 00:24:10.058 Max: 1 00:24:10.058 Min: 1 00:24:10.058 Number of Namespaces: 0 00:24:10.058 Compare Command: Not Supported 00:24:10.058 Write Uncorrectable Command: Not Supported 00:24:10.058 Dataset Management Command: Not Supported 00:24:10.058 Write Zeroes Command: Not Supported 00:24:10.058 Set Features Save Field: Not Supported 00:24:10.058 Reservations: Not Supported 00:24:10.058 Timestamp: Not Supported 00:24:10.058 Copy: Not Supported 00:24:10.058 Volatile Write Cache: Not Present 00:24:10.058 Atomic Write Unit (Normal): 1 00:24:10.058 Atomic Write Unit (PFail): 1 00:24:10.058 Atomic Compare & Write Unit: 1 00:24:10.058 Fused Compare & Write: Supported 00:24:10.058 Scatter-Gather List 00:24:10.058 SGL Command Set: Supported 00:24:10.058 SGL Keyed: Supported 00:24:10.058 SGL Bit Bucket Descriptor: Not Supported 00:24:10.058 SGL Metadata Pointer: Not Supported 00:24:10.058 Oversized SGL: Not Supported 00:24:10.058 SGL Metadata Address: Not Supported 00:24:10.058 SGL Offset: Supported 00:24:10.058 Transport SGL Data Block: Not Supported 00:24:10.058 Replay Protected Memory Block: Not Supported 00:24:10.058 00:24:10.058 Firmware Slot Information 00:24:10.058 ========================= 00:24:10.058 Active slot: 0 00:24:10.058 00:24:10.058 00:24:10.058 Error Log 00:24:10.058 ========= 00:24:10.059 00:24:10.059 Active Namespaces 00:24:10.059 ================= 00:24:10.059 Discovery Log Page 00:24:10.059 ================== 00:24:10.059 Generation Counter: 2 00:24:10.059 Number of Records: 2 00:24:10.059 Record Format: 0 00:24:10.059 00:24:10.059 Discovery Log Entry 0 00:24:10.059 ---------------------- 00:24:10.059 Transport Type: 3 (TCP) 00:24:10.059 Address Family: 1 (IPv4) 00:24:10.059 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:10.059 Entry Flags: 00:24:10.059 Duplicate 
Returned Information: 1 00:24:10.059 Explicit Persistent Connection Support for Discovery: 1 00:24:10.059 Transport Requirements: 00:24:10.059 Secure Channel: Not Required 00:24:10.059 Port ID: 0 (0x0000) 00:24:10.059 Controller ID: 65535 (0xffff) 00:24:10.059 Admin Max SQ Size: 128 00:24:10.059 Transport Service Identifier: 4420 00:24:10.059 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:10.059 Transport Address: 10.0.0.2 00:24:10.059 Discovery Log Entry 1 00:24:10.059 ---------------------- 00:24:10.059 Transport Type: 3 (TCP) 00:24:10.059 Address Family: 1 (IPv4) 00:24:10.059 Subsystem Type: 2 (NVM Subsystem) 00:24:10.059 Entry Flags: 00:24:10.059 Duplicate Returned Information: 0 00:24:10.059 Explicit Persistent Connection Support for Discovery: 0 00:24:10.059 Transport Requirements: 00:24:10.059 Secure Channel: Not Required 00:24:10.059 Port ID: 0 (0x0000) 00:24:10.059 Controller ID: 65535 (0xffff) 00:24:10.059 Admin Max SQ Size: 128 00:24:10.059 Transport Service Identifier: 4420 00:24:10.059 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:10.059 Transport Address: 10.0.0.2 [2024-12-06 14:38:16.892581] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:10.059 [2024-12-06 14:38:16.892597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.059 [2024-12-06 14:38:16.892604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.059 [2024-12-06 14:38:16.892610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.059 [2024-12-06 14:38:16.892616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.059 [2024-12-06 14:38:16.892626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.059 [2024-12-06 14:38:16.892642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.059 [2024-12-06 14:38:16.892667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.059 [2024-12-06 14:38:16.892722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.059 [2024-12-06 14:38:16.892728] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.059 [2024-12-06 14:38:16.892732] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.059 [2024-12-06 14:38:16.892744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892767] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.059 [2024-12-06 14:38:16.892792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.059 [2024-12-06 14:38:16.892816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.059 [2024-12-06 14:38:16.892885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.059 [2024-12-06 14:38:16.892893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.059 [2024-12-06 14:38:16.892896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892900] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.059 [2024-12-06 14:38:16.892906] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:10.059 [2024-12-06 14:38:16.892912] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:10.059 [2024-12-06 14:38:16.892923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.892931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.059 [2024-12-06 14:38:16.892938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.059 [2024-12-06 14:38:16.892957] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.059 [2024-12-06 14:38:16.893014] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.059 [2024-12-06 14:38:16.893020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.059 [2024-12-06 14:38:16.893024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.059 [2024-12-06 14:38:16.893039] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893044] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.059 [2024-12-06 14:38:16.893055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.059 [2024-12-06 14:38:16.893072] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.059 [2024-12-06 14:38:16.893127] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.059 [2024-12-06 14:38:16.893134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.059 [2024-12-06 14:38:16.893137] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893141] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.059 [2024-12-06 14:38:16.893152] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893157] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x11c7d30) 00:24:10.059 [2024-12-06 14:38:16.893168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.059 [2024-12-06 14:38:16.893185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.059 [2024-12-06 14:38:16.893237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.059 [2024-12-06 14:38:16.893244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.059 [2024-12-06 14:38:16.893247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893251] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.059 [2024-12-06 14:38:16.893262] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.059 [2024-12-06 14:38:16.893277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.059 [2024-12-06 14:38:16.893294] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.059 [2024-12-06 14:38:16.893362] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.059 [2024-12-06 14:38:16.893369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.059 [2024-12-06 14:38:16.893373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.059 [2024-12-06 14:38:16.893387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.059 [2024-12-06 14:38:16.893403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.059 [2024-12-06 14:38:16.893434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.059 [2024-12-06 14:38:16.893501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.059 [2024-12-06 14:38:16.893508] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.059 [2024-12-06 14:38:16.893512] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.059 [2024-12-06 14:38:16.893527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.059 [2024-12-06 14:38:16.893532] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.893543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
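The discovery log page printed above (Generation Counter 2, two records) was fetched with the GET LOG PAGE (02) commands logged before the dump; the long run of FABRIC PROPERTY GET qid:0 cid:3 entries that continues below is the driver polling CSTS while it shuts the discovery controller down again. For reference, a hedged sketch of reading that log page through SPDK's public API is shown here; buffer sizing and error handling are simplified, and the names are assumed from the SPDK headers rather than taken from this test.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch: read the discovery log page (log page 0x70) from an already
     * connected controller handle and poll the admin queue until it completes. */
    static bool g_log_page_done;

    static void log_page_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        g_log_page_done = true;
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE (discovery) failed\n");
        }
    }

    static int fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf, uint32_t len)
    {
        int rc;

        g_log_page_done = false;
        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              0 /* nsid */, buf, len, 0 /* offset */,
                                              log_page_cb, NULL);
        if (rc != 0) {
            return rc;
        }
        while (!g_log_page_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);  /* poll admin queue */
        }
        return 0;
    }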
00:24:10.060 [2024-12-06 14:38:16.893561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.893614] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.893620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.893624] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893628] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.893639] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893668] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.893680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.893699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.893758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.893765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.893768] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893773] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.893784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893792] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.893800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.893818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.893870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.893877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.893880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.893896] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.893912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.893930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.893984] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.893991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.893995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.893999] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.894010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.894026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.894044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.894099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.894105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.894109] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894113] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.894124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894133] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.894140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.894158] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.894227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.894233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.894237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894241] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.894252] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894256] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894260] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.894267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.894284] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.894336] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.894343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 
[2024-12-06 14:38:16.894346] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.894361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894369] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.894376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.894394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.894453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.894462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.894465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.894481] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894485] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894489] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.894497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.894516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.894571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.894578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.894582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.060 [2024-12-06 14:38:16.894596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.060 [2024-12-06 14:38:16.894605] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.060 [2024-12-06 14:38:16.894612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.060 [2024-12-06 14:38:16.894629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.060 [2024-12-06 14:38:16.894684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.060 [2024-12-06 14:38:16.894690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.060 [2024-12-06 14:38:16.894694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.894709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.894724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.894741] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.894790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.894797] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.894801] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894805] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.894815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.894831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.894848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.894905] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.894912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.894916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894919] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.894930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.894939] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.894946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.894963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895048] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.895077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.895191] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895271] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895276] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.895305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895355] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895379] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.895425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895478] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895489] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895505] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895513] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.895539] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895604] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.895652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895732] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895736] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:10.061 [2024-12-06 14:38:16.895765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.061 [2024-12-06 14:38:16.895845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895849] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.061 [2024-12-06 14:38:16.895860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.061 [2024-12-06 14:38:16.895878] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.061 [2024-12-06 14:38:16.895927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.061 [2024-12-06 14:38:16.895934] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.061 [2024-12-06 14:38:16.895937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.061 [2024-12-06 14:38:16.895942] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.062 [2024-12-06 14:38:16.895953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.895957] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.895961] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.062 [2024-12-06 14:38:16.895968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.062 [2024-12-06 14:38:16.895986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.062 [2024-12-06 14:38:16.896041] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.062 [2024-12-06 14:38:16.896048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.062 [2024-12-06 14:38:16.896051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.062 [2024-12-06 14:38:16.896066] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896071] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.062 [2024-12-06 14:38:16.896082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.062 [2024-12-06 14:38:16.896099] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.062 [2024-12-06 14:38:16.896154] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.062 [2024-12-06 14:38:16.896161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.062 [2024-12-06 14:38:16.896165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.062 [2024-12-06 14:38:16.896179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896184] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.062 [2024-12-06 14:38:16.896195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.062 [2024-12-06 14:38:16.896212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.062 [2024-12-06 14:38:16.896269] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.062 [2024-12-06 14:38:16.896275] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.062 [2024-12-06 14:38:16.896279] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896283] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.062 [2024-12-06 14:38:16.896294] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896298] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896302] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.062 [2024-12-06 14:38:16.896310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.062 [2024-12-06 14:38:16.896327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.062 [2024-12-06 14:38:16.896379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.062 [2024-12-06 14:38:16.896386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.062 [2024-12-06 14:38:16.896389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.896393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.062 [2024-12-06 14:38:16.896404] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.900425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.900432] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c7d30) 00:24:10.062 [2024-12-06 14:38:16.900441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.062 [2024-12-06 14:38:16.900466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1226350, cid 3, qid 0 00:24:10.062 [2024-12-06 14:38:16.900522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.062 [2024-12-06 14:38:16.900529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.062 
[2024-12-06 14:38:16.900533] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.062 [2024-12-06 14:38:16.900537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1226350) on tqpair=0x11c7d30 00:24:10.062 [2024-12-06 14:38:16.900547] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:10.062 00:24:10.062 14:38:16 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:10.062 [2024-12-06 14:38:16.937007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:10.062 [2024-12-06 14:38:16.937054] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83366 ] 00:24:10.324 [2024-12-06 14:38:17.076538] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:10.324 [2024-12-06 14:38:17.076608] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:10.324 [2024-12-06 14:38:17.076615] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:10.324 [2024-12-06 14:38:17.076625] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:10.324 [2024-12-06 14:38:17.076632] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:10.324 [2024-12-06 14:38:17.076724] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:10.324 [2024-12-06 14:38:17.076766] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1008d30 0 00:24:10.324 [2024-12-06 14:38:17.083461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:10.324 [2024-12-06 14:38:17.083485] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:10.324 [2024-12-06 14:38:17.083507] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:10.324 [2024-12-06 14:38:17.083511] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:10.324 [2024-12-06 14:38:17.083548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.083555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.083559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.324 [2024-12-06 14:38:17.083570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:10.324 [2024-12-06 14:38:17.083599] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.324 [2024-12-06 14:38:17.090504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.324 [2024-12-06 14:38:17.090526] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.324 [2024-12-06 14:38:17.090547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.090552] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) 
on tqpair=0x1008d30 00:24:10.324 [2024-12-06 14:38:17.090565] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:10.324 [2024-12-06 14:38:17.090572] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:10.324 [2024-12-06 14:38:17.090578] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:10.324 [2024-12-06 14:38:17.090591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.090596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.090600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.324 [2024-12-06 14:38:17.090609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.324 [2024-12-06 14:38:17.090636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.324 [2024-12-06 14:38:17.090705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.324 [2024-12-06 14:38:17.090712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.324 [2024-12-06 14:38:17.090716] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.090720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.324 [2024-12-06 14:38:17.090726] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:10.324 [2024-12-06 14:38:17.090734] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:10.324 [2024-12-06 14:38:17.090741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.090745] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.090749] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.324 [2024-12-06 14:38:17.090756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.324 [2024-12-06 14:38:17.090791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.324 [2024-12-06 14:38:17.091295] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.324 [2024-12-06 14:38:17.091311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.324 [2024-12-06 14:38:17.091315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.091320] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.324 [2024-12-06 14:38:17.091327] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:10.324 [2024-12-06 14:38:17.091336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:10.324 [2024-12-06 14:38:17.091344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.091348] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.091352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.324 [2024-12-06 14:38:17.091360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.324 [2024-12-06 14:38:17.091391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.324 [2024-12-06 14:38:17.091492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.324 [2024-12-06 14:38:17.091501] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.324 [2024-12-06 14:38:17.091505] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.091509] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.324 [2024-12-06 14:38:17.091516] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:10.324 [2024-12-06 14:38:17.091527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.091532] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.091536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.324 [2024-12-06 14:38:17.091544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.324 [2024-12-06 14:38:17.091562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.324 [2024-12-06 14:38:17.092045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.324 [2024-12-06 14:38:17.092061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.324 [2024-12-06 14:38:17.092065] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.324 [2024-12-06 14:38:17.092070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.324 [2024-12-06 14:38:17.092076] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:10.324 [2024-12-06 14:38:17.092081] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:10.324 [2024-12-06 14:38:17.092090] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:10.324 [2024-12-06 14:38:17.092197] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:10.324 [2024-12-06 14:38:17.092201] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:10.325 [2024-12-06 14:38:17.092210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.092214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.092218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.092225] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.325 [2024-12-06 14:38:17.092246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.325 [2024-12-06 14:38:17.092484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.325 [2024-12-06 14:38:17.092500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.325 [2024-12-06 14:38:17.092505] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.092509] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.325 [2024-12-06 14:38:17.092516] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:10.325 [2024-12-06 14:38:17.092527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.092532] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.092536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.092544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.325 [2024-12-06 14:38:17.092563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.325 [2024-12-06 14:38:17.092988] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.325 [2024-12-06 14:38:17.093002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.325 [2024-12-06 14:38:17.093007] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093011] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.325 [2024-12-06 14:38:17.093017] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:10.325 [2024-12-06 14:38:17.093023] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:10.325 [2024-12-06 14:38:17.093031] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:10.325 [2024-12-06 14:38:17.093046] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:10.325 [2024-12-06 14:38:17.093056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093065] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.093072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.325 [2024-12-06 14:38:17.093093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.325 [2024-12-06 14:38:17.093430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
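The entries above trace SPDK's host-side bring-up state machine for nqn.2016-06.io.spdk:cnode1 over TCP: connect the admin queue, read VS and CAP, write CC.EN = 1, wait for CSTS.RDY = 1, then continue with Identify Controller and AER configuration just below. In this run the sequence is driven internally by the spdk_nvme_identify example invoked above; purely as an illustration (not code from this test), a minimal host program using SPDK's public API, here spdk_nvme_transport_id_parse, spdk_nvme_connect, and spdk_nvme_ctrlr_get_data, would trigger the same flow. The sketch is untested in this environment and keeps error handling minimal:

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";           /* hypothetical app name */
    if (spdk_env_init(&env_opts) != 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* Same transport ID string the test passes to spdk_nvme_identify via -r. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "could not parse transport ID\n");
        return 1;
    }

    /* Runs the connect/enable/identify state machine traced in the log above. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("model: %.40s  serial: %.20s  mdts: %u\n",
           (const char *)cdata->mn, (const char *)cdata->sn,
           (unsigned)cdata->mdts);

    spdk_nvme_detach(ctrlr);
    return 0;
}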
00:24:10.325 [2024-12-06 14:38:17.093446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.325 [2024-12-06 14:38:17.093451] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093455] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=4096, cccid=0 00:24:10.325 [2024-12-06 14:38:17.093460] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1066f30) on tqpair(0x1008d30): expected_datao=0, payload_size=4096 00:24:10.325 [2024-12-06 14:38:17.093469] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093473] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093538] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.325 [2024-12-06 14:38:17.093545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.325 [2024-12-06 14:38:17.093549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093553] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.325 [2024-12-06 14:38:17.093562] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:10.325 [2024-12-06 14:38:17.093567] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:10.325 [2024-12-06 14:38:17.093572] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:10.325 [2024-12-06 14:38:17.093577] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:10.325 [2024-12-06 14:38:17.093582] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:10.325 [2024-12-06 14:38:17.093587] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:10.325 [2024-12-06 14:38:17.093600] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:10.325 [2024-12-06 14:38:17.093608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093613] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.093616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.093625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.325 [2024-12-06 14:38:17.093674] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.325 [2024-12-06 14:38:17.093966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.325 [2024-12-06 14:38:17.093996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.325 [2024-12-06 14:38:17.094001] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1066f30) on tqpair=0x1008d30 00:24:10.325 [2024-12-06 14:38:17.094014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.094030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.325 [2024-12-06 14:38:17.094037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094044] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.094050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.325 [2024-12-06 14:38:17.094056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094064] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.094070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.325 [2024-12-06 14:38:17.094076] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.094090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.325 [2024-12-06 14:38:17.094095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:10.325 [2024-12-06 14:38:17.094108] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:10.325 [2024-12-06 14:38:17.094115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.094123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1008d30) 00:24:10.325 [2024-12-06 14:38:17.094130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.325 [2024-12-06 14:38:17.094183] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1066f30, cid 0, qid 0 00:24:10.325 [2024-12-06 14:38:17.094191] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067090, cid 1, qid 0 00:24:10.325 [2024-12-06 14:38:17.094196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10671f0, cid 2, qid 0 00:24:10.325 [2024-12-06 14:38:17.094200] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.325 [2024-12-06 14:38:17.094205] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x10674b0, cid 4, qid 0 00:24:10.325 [2024-12-06 14:38:17.098515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.325 [2024-12-06 14:38:17.098537] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.325 [2024-12-06 14:38:17.098542] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.325 [2024-12-06 14:38:17.098563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10674b0) on tqpair=0x1008d30 00:24:10.325 [2024-12-06 14:38:17.098571] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:10.326 [2024-12-06 14:38:17.098577] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.098588] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.098600] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.098609] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.098614] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.098618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1008d30) 00:24:10.326 [2024-12-06 14:38:17.098627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.326 [2024-12-06 14:38:17.098653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10674b0, cid 4, qid 0 00:24:10.326 [2024-12-06 14:38:17.098724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.326 [2024-12-06 14:38:17.098732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.326 [2024-12-06 14:38:17.098736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.098740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10674b0) on tqpair=0x1008d30 00:24:10.326 [2024-12-06 14:38:17.098804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.098815] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.098838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.098843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.098847] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1008d30) 00:24:10.326 [2024-12-06 14:38:17.098870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.326 [2024-12-06 14:38:17.098890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10674b0, cid 4, qid 0 00:24:10.326 [2024-12-06 14:38:17.099331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.326 [2024-12-06 14:38:17.099346] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.326 [2024-12-06 14:38:17.099351] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099355] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=4096, cccid=4 00:24:10.326 [2024-12-06 14:38:17.099360] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10674b0) on tqpair(0x1008d30): expected_datao=0, payload_size=4096 00:24:10.326 [2024-12-06 14:38:17.099369] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099373] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.326 [2024-12-06 14:38:17.099389] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.326 [2024-12-06 14:38:17.099392] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10674b0) on tqpair=0x1008d30 00:24:10.326 [2024-12-06 14:38:17.099440] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:10.326 [2024-12-06 14:38:17.099453] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.099465] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.099474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1008d30) 00:24:10.326 [2024-12-06 14:38:17.099490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.326 [2024-12-06 14:38:17.099513] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10674b0, cid 4, qid 0 00:24:10.326 [2024-12-06 14:38:17.099930] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.326 [2024-12-06 14:38:17.099946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.326 [2024-12-06 14:38:17.099951] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099955] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=4096, cccid=4 00:24:10.326 [2024-12-06 14:38:17.099960] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10674b0) on tqpair(0x1008d30): expected_datao=0, payload_size=4096 00:24:10.326 [2024-12-06 14:38:17.099968] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099988] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.099997] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.326 [2024-12-06 14:38:17.100003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.326 [2024-12-06 14:38:17.100006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
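Earlier in this init flow the host logged "transport max_xfer_size 4294967295" and "MDTS max_xfer_size 131072", and the controller summary printed further down reports "Max Data Transfer Size: 131072" with a 4096-byte minimum memory page size. Those numbers follow the NVMe rule that MDTS expresses the transfer limit as a power of two multiple of the CAP.MPSMIN page size, i.e. 4096 << 5 = 131072, so MDTS = 5 here (inferred from the sizes, not printed directly). A quick standalone arithmetic check, illustrative only:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe Identify Controller: MDTS limits a transfer to 2^MDTS units of the
 * CAP.MPSMIN page size; MDTS = 0 means no limit. */
static uint64_t mdts_max_xfer(uint8_t mdts, uint64_t min_page_size)
{
    return mdts == 0 ? UINT64_MAX : (min_page_size << mdts);
}

int main(void)
{
    /* Values from this run: 4096-byte minimum page size and a 131072-byte
     * MDTS-limited transfer, which implies MDTS = 5. */
    uint64_t max_xfer = mdts_max_xfer(5, 4096);

    printf("MDTS=5, MPSMIN page=4096 -> %llu bytes\n",
           (unsigned long long)max_xfer);
    assert(max_xfer == 131072);
    return 0;
}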
00:24:10.326 [2024-12-06 14:38:17.100010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10674b0) on tqpair=0x1008d30 00:24:10.326 [2024-12-06 14:38:17.100027] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100038] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1008d30) 00:24:10.326 [2024-12-06 14:38:17.100063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.326 [2024-12-06 14:38:17.100100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10674b0, cid 4, qid 0 00:24:10.326 [2024-12-06 14:38:17.100362] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.326 [2024-12-06 14:38:17.100377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.326 [2024-12-06 14:38:17.100382] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100386] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=4096, cccid=4 00:24:10.326 [2024-12-06 14:38:17.100391] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10674b0) on tqpair(0x1008d30): expected_datao=0, payload_size=4096 00:24:10.326 [2024-12-06 14:38:17.100399] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100403] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100467] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.326 [2024-12-06 14:38:17.100476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.326 [2024-12-06 14:38:17.100480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100484] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10674b0) on tqpair=0x1008d30 00:24:10.326 [2024-12-06 14:38:17.100495] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100504] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100523] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100528] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100534] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:10.326 [2024-12-06 14:38:17.100539] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:10.326 [2024-12-06 14:38:17.100544] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:10.326 [2024-12-06 14:38:17.100559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100564] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1008d30) 00:24:10.326 [2024-12-06 14:38:17.100576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.326 [2024-12-06 14:38:17.100584] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1008d30) 00:24:10.326 [2024-12-06 14:38:17.100598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.326 [2024-12-06 14:38:17.100626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10674b0, cid 4, qid 0 00:24:10.326 [2024-12-06 14:38:17.100634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067610, cid 5, qid 0 00:24:10.326 [2024-12-06 14:38:17.100969] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.326 [2024-12-06 14:38:17.100986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.326 [2024-12-06 14:38:17.100991] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.100995] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10674b0) on tqpair=0x1008d30 00:24:10.326 [2024-12-06 14:38:17.101003] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.326 [2024-12-06 14:38:17.101009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.326 [2024-12-06 14:38:17.101013] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.326 [2024-12-06 14:38:17.101017] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067610) on tqpair=0x1008d30 00:24:10.327 [2024-12-06 14:38:17.101029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101033] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101037] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1008d30) 00:24:10.327 [2024-12-06 14:38:17.101044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.327 [2024-12-06 14:38:17.101064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067610, cid 5, qid 0 00:24:10.327 [2024-12-06 14:38:17.101231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.327 [2024-12-06 14:38:17.101238] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:10.327 [2024-12-06 14:38:17.101242] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067610) on tqpair=0x1008d30 00:24:10.327 [2024-12-06 14:38:17.101257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1008d30) 00:24:10.327 [2024-12-06 14:38:17.101272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.327 [2024-12-06 14:38:17.101290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067610, cid 5, qid 0 00:24:10.327 [2024-12-06 14:38:17.101720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.327 [2024-12-06 14:38:17.101738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.327 [2024-12-06 14:38:17.101743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067610) on tqpair=0x1008d30 00:24:10.327 [2024-12-06 14:38:17.101760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101765] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101769] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1008d30) 00:24:10.327 [2024-12-06 14:38:17.101777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.327 [2024-12-06 14:38:17.101799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067610, cid 5, qid 0 00:24:10.327 [2024-12-06 14:38:17.101862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.327 [2024-12-06 14:38:17.101869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.327 [2024-12-06 14:38:17.101873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067610) on tqpair=0x1008d30 00:24:10.327 [2024-12-06 14:38:17.101892] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101898] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101901] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1008d30) 00:24:10.327 [2024-12-06 14:38:17.101909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.327 [2024-12-06 14:38:17.101917] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101921] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1008d30) 00:24:10.327 [2024-12-06 14:38:17.101932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.327 [2024-12-06 14:38:17.101939] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1008d30) 00:24:10.327 [2024-12-06 14:38:17.101954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.327 [2024-12-06 14:38:17.101962] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.101986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1008d30) 00:24:10.327 [2024-12-06 14:38:17.101992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.327 [2024-12-06 14:38:17.102012] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067610, cid 5, qid 0 00:24:10.327 [2024-12-06 14:38:17.102019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10674b0, cid 4, qid 0 00:24:10.327 [2024-12-06 14:38:17.102024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067770, cid 6, qid 0 00:24:10.327 [2024-12-06 14:38:17.102029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10678d0, cid 7, qid 0 00:24:10.327 [2024-12-06 14:38:17.105521] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.327 [2024-12-06 14:38:17.105539] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.327 [2024-12-06 14:38:17.105561] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105565] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=8192, cccid=5 00:24:10.327 [2024-12-06 14:38:17.105570] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1067610) on tqpair(0x1008d30): expected_datao=0, payload_size=8192 00:24:10.327 [2024-12-06 14:38:17.105579] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105584] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.327 [2024-12-06 14:38:17.105597] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.327 [2024-12-06 14:38:17.105600] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105604] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=512, cccid=4 00:24:10.327 [2024-12-06 14:38:17.105609] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10674b0) on tqpair(0x1008d30): expected_datao=0, payload_size=512 00:24:10.327 [2024-12-06 14:38:17.105617] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105621] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105627] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.327 [2024-12-06 14:38:17.105633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.327 [2024-12-06 14:38:17.105636] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105649] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=512, cccid=6 00:24:10.327 [2024-12-06 14:38:17.105654] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1067770) on tqpair(0x1008d30): expected_datao=0, payload_size=512 00:24:10.327 [2024-12-06 14:38:17.105662] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105666] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.327 [2024-12-06 14:38:17.105677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.327 [2024-12-06 14:38:17.105681] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.327 [2024-12-06 14:38:17.105685] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1008d30): datao=0, datal=4096, cccid=7 00:24:10.327 [2024-12-06 14:38:17.105690] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10678d0) on tqpair(0x1008d30): expected_datao=0, payload_size=4096 00:24:10.327 ===================================================== 00:24:10.327 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:10.327 ===================================================== 00:24:10.327 Controller Capabilities/Features 00:24:10.327 ================================ 00:24:10.327 Vendor ID: 8086 00:24:10.327 Subsystem Vendor ID: 8086 00:24:10.327 Serial Number: SPDK00000000000001 00:24:10.327 Model Number: SPDK bdev Controller 00:24:10.327 Firmware Version: 24.01.1 00:24:10.327 Recommended Arb Burst: 6 00:24:10.327 IEEE OUI Identifier: e4 d2 5c 00:24:10.327 Multi-path I/O 00:24:10.327 May have multiple subsystem ports: Yes 00:24:10.327 May have multiple controllers: Yes 00:24:10.327 Associated with SR-IOV VF: No 00:24:10.327 Max Data Transfer Size: 131072 00:24:10.327 Max Number of Namespaces: 32 00:24:10.327 Max Number of I/O Queues: 127 00:24:10.327 NVMe Specification Version (VS): 1.3 00:24:10.327 NVMe Specification Version (Identify): 1.3 00:24:10.327 Maximum Queue Entries: 128 00:24:10.327 Contiguous Queues Required: Yes 00:24:10.327 Arbitration Mechanisms Supported 00:24:10.327 Weighted Round Robin: Not Supported 00:24:10.327 Vendor Specific: Not Supported 00:24:10.327 Reset Timeout: 15000 ms 00:24:10.327 Doorbell Stride: 4 bytes 00:24:10.327 NVM Subsystem Reset: Not Supported 00:24:10.327 Command Sets Supported 00:24:10.327 NVM Command Set: Supported 00:24:10.327 Boot Partition: Not Supported 00:24:10.327 Memory Page Size Minimum: 4096 bytes 00:24:10.328 Memory Page Size Maximum: 4096 bytes 00:24:10.328 Persistent Memory Region: Not Supported 00:24:10.328 Optional Asynchronous Events Supported 00:24:10.328 Namespace Attribute Notices: Supported 00:24:10.328 Firmware Activation Notices: Not Supported 00:24:10.328 ANA Change Notices: Not Supported 00:24:10.328 PLE Aggregate Log Change Notices: Not Supported 00:24:10.328 LBA Status Info Alert Notices: Not Supported 00:24:10.328 EGE Aggregate Log Change Notices: Not Supported 00:24:10.328 Normal NVM Subsystem Shutdown event: Not Supported 
00:24:10.328 Zone Descriptor Change Notices: Not Supported 00:24:10.328 Discovery Log Change Notices: Not Supported 00:24:10.328 Controller Attributes 00:24:10.328 128-bit Host Identifier: Supported 00:24:10.328 Non-Operational Permissive Mode: Not Supported 00:24:10.328 NVM Sets: Not Supported 00:24:10.328 Read Recovery Levels: Not Supported 00:24:10.328 Endurance Groups: Not Supported 00:24:10.328 Predictable Latency Mode: Not Supported 00:24:10.328 Traffic Based Keep ALive: Not Supported 00:24:10.328 Namespace Granularity: Not Supported 00:24:10.328 SQ Associations: Not Supported 00:24:10.328 UUID List: Not Supported 00:24:10.328 Multi-Domain Subsystem: Not Supported 00:24:10.328 Fixed Capacity Management: Not Supported 00:24:10.328 Variable Capacity Management: Not Supported 00:24:10.328 Delete Endurance Group: Not Supported 00:24:10.328 Delete NVM Set: Not Supported 00:24:10.328 Extended LBA Formats Supported: Not Supported 00:24:10.328 Flexible Data Placement Supported: Not Supported 00:24:10.328 00:24:10.328 Controller Memory Buffer Support 00:24:10.328 ================================ 00:24:10.328 Supported: No 00:24:10.328 00:24:10.328 Persistent Memory Region Support 00:24:10.328 ================================ 00:24:10.328 Supported: No 00:24:10.328 00:24:10.328 Admin Command Set Attributes 00:24:10.328 ============================ 00:24:10.328 Security Send/Receive: Not Supported 00:24:10.328 Format NVM: Not Supported 00:24:10.328 Firmware Activate/Download: Not Supported 00:24:10.328 Namespace Management: Not Supported 00:24:10.328 Device Self-Test: Not Supported 00:24:10.328 Directives: Not Supported 00:24:10.328 NVMe-MI: Not Supported 00:24:10.328 Virtualization Management: Not Supported 00:24:10.328 Doorbell Buffer Config: Not Supported 00:24:10.328 Get LBA Status Capability: Not Supported 00:24:10.328 Command & Feature Lockdown Capability: Not Supported 00:24:10.328 Abort Command Limit: 4 00:24:10.328 Async Event Request Limit: 4 00:24:10.328 Number of Firmware Slots: N/A 00:24:10.328 Firmware Slot 1 Read-Only: N/A 00:24:10.328 Firmware Activation Without Reset: [2024-12-06 14:38:17.105697] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.328 [2024-12-06 14:38:17.105701] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.328 [2024-12-06 14:38:17.105707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.328 [2024-12-06 14:38:17.105713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.328 [2024-12-06 14:38:17.105717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.328 [2024-12-06 14:38:17.105721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067610) on tqpair=0x1008d30 00:24:10.328 [2024-12-06 14:38:17.105743] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.328 [2024-12-06 14:38:17.105751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.328 [2024-12-06 14:38:17.105755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.328 [2024-12-06 14:38:17.105759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10674b0) on tqpair=0x1008d30 00:24:10.328 [2024-12-06 14:38:17.105770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.328 [2024-12-06 14:38:17.105776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.328 [2024-12-06 14:38:17.105780] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:10.328 [2024-12-06 14:38:17.105784] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067770) on tqpair=0x1008d30 00:24:10.328 [2024-12-06 14:38:17.105793] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.328 [2024-12-06 14:38:17.105799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.328 [2024-12-06 14:38:17.105803] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.328 [2024-12-06 14:38:17.105807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10678d0) on tqpair=0x1008d30 00:24:10.328 N/A 00:24:10.328 Multiple Update Detection Support: N/A 00:24:10.328 Firmware Update Granularity: No Information Provided 00:24:10.328 Per-Namespace SMART Log: No 00:24:10.328 Asymmetric Namespace Access Log Page: Not Supported 00:24:10.328 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:10.328 Command Effects Log Page: Supported 00:24:10.328 Get Log Page Extended Data: Supported 00:24:10.328 Telemetry Log Pages: Not Supported 00:24:10.328 Persistent Event Log Pages: Not Supported 00:24:10.328 Supported Log Pages Log Page: May Support 00:24:10.328 Commands Supported & Effects Log Page: Not Supported 00:24:10.328 Feature Identifiers & Effects Log Page:May Support 00:24:10.328 NVMe-MI Commands & Effects Log Page: May Support 00:24:10.328 Data Area 4 for Telemetry Log: Not Supported 00:24:10.328 Error Log Page Entries Supported: 128 00:24:10.328 Keep Alive: Supported 00:24:10.328 Keep Alive Granularity: 10000 ms 00:24:10.328 00:24:10.328 NVM Command Set Attributes 00:24:10.328 ========================== 00:24:10.328 Submission Queue Entry Size 00:24:10.328 Max: 64 00:24:10.328 Min: 64 00:24:10.328 Completion Queue Entry Size 00:24:10.328 Max: 16 00:24:10.328 Min: 16 00:24:10.328 Number of Namespaces: 32 00:24:10.328 Compare Command: Supported 00:24:10.328 Write Uncorrectable Command: Not Supported 00:24:10.328 Dataset Management Command: Supported 00:24:10.328 Write Zeroes Command: Supported 00:24:10.328 Set Features Save Field: Not Supported 00:24:10.328 Reservations: Supported 00:24:10.328 Timestamp: Not Supported 00:24:10.328 Copy: Supported 00:24:10.328 Volatile Write Cache: Present 00:24:10.328 Atomic Write Unit (Normal): 1 00:24:10.328 Atomic Write Unit (PFail): 1 00:24:10.328 Atomic Compare & Write Unit: 1 00:24:10.328 Fused Compare & Write: Supported 00:24:10.328 Scatter-Gather List 00:24:10.328 SGL Command Set: Supported 00:24:10.328 SGL Keyed: Supported 00:24:10.328 SGL Bit Bucket Descriptor: Not Supported 00:24:10.328 SGL Metadata Pointer: Not Supported 00:24:10.328 Oversized SGL: Not Supported 00:24:10.328 SGL Metadata Address: Not Supported 00:24:10.328 SGL Offset: Supported 00:24:10.328 Transport SGL Data Block: Not Supported 00:24:10.328 Replay Protected Memory Block: Not Supported 00:24:10.328 00:24:10.328 Firmware Slot Information 00:24:10.328 ========================= 00:24:10.328 Active slot: 1 00:24:10.328 Slot 1 Firmware Revision: 24.01.1 00:24:10.328 00:24:10.328 00:24:10.328 Commands Supported and Effects 00:24:10.328 ============================== 00:24:10.328 Admin Commands 00:24:10.328 -------------- 00:24:10.328 Get Log Page (02h): Supported 00:24:10.328 Identify (06h): Supported 00:24:10.328 Abort (08h): Supported 00:24:10.328 Set Features (09h): Supported 00:24:10.328 Get Features (0Ah): Supported 00:24:10.328 Asynchronous Event Request (0Ch): Supported 00:24:10.328 Keep Alive (18h): Supported 00:24:10.328 I/O Commands 00:24:10.328 
------------ 00:24:10.328 Flush (00h): Supported LBA-Change 00:24:10.328 Write (01h): Supported LBA-Change 00:24:10.328 Read (02h): Supported 00:24:10.328 Compare (05h): Supported 00:24:10.328 Write Zeroes (08h): Supported LBA-Change 00:24:10.328 Dataset Management (09h): Supported LBA-Change 00:24:10.328 Copy (19h): Supported LBA-Change 00:24:10.328 Unknown (79h): Supported LBA-Change 00:24:10.328 Unknown (7Ah): Supported 00:24:10.328 00:24:10.328 Error Log 00:24:10.328 ========= 00:24:10.328 00:24:10.328 Arbitration 00:24:10.328 =========== 00:24:10.328 Arbitration Burst: 1 00:24:10.328 00:24:10.328 Power Management 00:24:10.328 ================ 00:24:10.328 Number of Power States: 1 00:24:10.328 Current Power State: Power State #0 00:24:10.328 Power State #0: 00:24:10.328 Max Power: 0.00 W 00:24:10.328 Non-Operational State: Operational 00:24:10.328 Entry Latency: Not Reported 00:24:10.328 Exit Latency: Not Reported 00:24:10.328 Relative Read Throughput: 0 00:24:10.328 Relative Read Latency: 0 00:24:10.328 Relative Write Throughput: 0 00:24:10.328 Relative Write Latency: 0 00:24:10.328 Idle Power: Not Reported 00:24:10.328 Active Power: Not Reported 00:24:10.328 Non-Operational Permissive Mode: Not Supported 00:24:10.328 00:24:10.328 Health Information 00:24:10.328 ================== 00:24:10.328 Critical Warnings: 00:24:10.328 Available Spare Space: OK 00:24:10.328 Temperature: OK 00:24:10.328 Device Reliability: OK 00:24:10.328 Read Only: No 00:24:10.328 Volatile Memory Backup: OK 00:24:10.328 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:10.328 Temperature Threshold: [2024-12-06 14:38:17.105923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.105931] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.105935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.105944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.105987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10678d0, cid 7, qid 0 00:24:10.329 [2024-12-06 14:38:17.106696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.106714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.106718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.106723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10678d0) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.106759] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:10.329 [2024-12-06 14:38:17.106773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.329 [2024-12-06 14:38:17.106780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.329 [2024-12-06 14:38:17.106787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.329 [2024-12-06 14:38:17.106793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:10.329 [2024-12-06 14:38:17.106802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.106806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.106810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.106817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.106841] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.107020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.107027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.107031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.107044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.107060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.107082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.107525] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.107540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.107561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.107572] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:10.329 [2024-12-06 14:38:17.107592] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:10.329 [2024-12-06 14:38:17.107603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107612] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.107620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.107641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.107701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.107708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.107712] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 
14:38:17.107716] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.107727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.107736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.107743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.107760] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.108089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.108103] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.108108] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.108124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.108140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.108159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.108226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.108233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.108236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.108252] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108256] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108260] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.108267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.108284] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.108537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.108552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.108557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108562] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.108574] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108579] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108583] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.108591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.108611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.108676] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.108683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.108687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.329 [2024-12-06 14:38:17.108702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108707] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.329 [2024-12-06 14:38:17.108711] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.329 [2024-12-06 14:38:17.108719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.329 [2024-12-06 14:38:17.108736] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.329 [2024-12-06 14:38:17.109067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.329 [2024-12-06 14:38:17.109081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.329 [2024-12-06 14:38:17.109085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.109089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.330 [2024-12-06 14:38:17.109102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.109106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.109110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.330 [2024-12-06 14:38:17.109118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.330 [2024-12-06 14:38:17.109136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.330 [2024-12-06 14:38:17.109195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.330 [2024-12-06 14:38:17.109201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.330 [2024-12-06 14:38:17.109205] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.109209] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.330 [2024-12-06 14:38:17.109220] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.109225] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.330 [2024-12-06 
14:38:17.109228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.330 [2024-12-06 14:38:17.109235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.330 [2024-12-06 14:38:17.109252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.330 [2024-12-06 14:38:17.113454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.330 [2024-12-06 14:38:17.113473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.330 [2024-12-06 14:38:17.113494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.113498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.330 [2024-12-06 14:38:17.113512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.113517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.113521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1008d30) 00:24:10.330 [2024-12-06 14:38:17.113529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.330 [2024-12-06 14:38:17.113553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1067350, cid 3, qid 0 00:24:10.330 [2024-12-06 14:38:17.113618] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.330 [2024-12-06 14:38:17.113624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.330 [2024-12-06 14:38:17.113628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.330 [2024-12-06 14:38:17.113632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1067350) on tqpair=0x1008d30 00:24:10.330 [2024-12-06 14:38:17.113648] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:10.330 0 Kelvin (-273 Celsius) 00:24:10.330 Available Spare: 0% 00:24:10.330 Available Spare Threshold: 0% 00:24:10.330 Life Percentage Used: 0% 00:24:10.330 Data Units Read: 0 00:24:10.330 Data Units Written: 0 00:24:10.330 Host Read Commands: 0 00:24:10.330 Host Write Commands: 0 00:24:10.330 Controller Busy Time: 0 minutes 00:24:10.330 Power Cycles: 0 00:24:10.330 Power On Hours: 0 hours 00:24:10.330 Unsafe Shutdowns: 0 00:24:10.330 Unrecoverable Media Errors: 0 00:24:10.330 Lifetime Error Log Entries: 0 00:24:10.330 Warning Temperature Time: 0 minutes 00:24:10.330 Critical Temperature Time: 0 minutes 00:24:10.330 00:24:10.330 Number of Queues 00:24:10.330 ================ 00:24:10.330 Number of I/O Submission Queues: 127 00:24:10.330 Number of I/O Completion Queues: 127 00:24:10.330 00:24:10.330 Active Namespaces 00:24:10.330 ================= 00:24:10.330 Namespace ID:1 00:24:10.330 Error Recovery Timeout: Unlimited 00:24:10.330 Command Set Identifier: NVM (00h) 00:24:10.330 Deallocate: Supported 00:24:10.330 Deallocated/Unwritten Error: Not Supported 00:24:10.330 Deallocated Read Value: Unknown 00:24:10.330 Deallocate in Write Zeroes: Not Supported 00:24:10.330 Deallocated Guard Field: 0xFFFF 00:24:10.330 Flush: Supported 00:24:10.330 Reservation: Supported 00:24:10.330 Namespace Sharing Capabilities: Multiple Controllers 00:24:10.330 Size (in 
LBAs): 131072 (0GiB) 00:24:10.330 Capacity (in LBAs): 131072 (0GiB) 00:24:10.330 Utilization (in LBAs): 131072 (0GiB) 00:24:10.330 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:10.330 EUI64: ABCDEF0123456789 00:24:10.330 UUID: 06eb157b-eba8-40dd-9006-5963d5736f20 00:24:10.330 Thin Provisioning: Not Supported 00:24:10.330 Per-NS Atomic Units: Yes 00:24:10.330 Atomic Boundary Size (Normal): 0 00:24:10.330 Atomic Boundary Size (PFail): 0 00:24:10.330 Atomic Boundary Offset: 0 00:24:10.330 Maximum Single Source Range Length: 65535 00:24:10.330 Maximum Copy Length: 65535 00:24:10.330 Maximum Source Range Count: 1 00:24:10.330 NGUID/EUI64 Never Reused: No 00:24:10.330 Namespace Write Protected: No 00:24:10.330 Number of LBA Formats: 1 00:24:10.330 Current LBA Format: LBA Format #00 00:24:10.330 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:10.330 00:24:10.330 14:38:17 -- host/identify.sh@51 -- # sync 00:24:10.330 14:38:17 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.330 14:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.330 14:38:17 -- common/autotest_common.sh@10 -- # set +x 00:24:10.330 14:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.330 14:38:17 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:10.330 14:38:17 -- host/identify.sh@56 -- # nvmftestfini 00:24:10.330 14:38:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:10.330 14:38:17 -- nvmf/common.sh@116 -- # sync 00:24:10.330 14:38:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:10.330 14:38:17 -- nvmf/common.sh@119 -- # set +e 00:24:10.330 14:38:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:10.330 14:38:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:10.330 rmmod nvme_tcp 00:24:10.330 rmmod nvme_fabrics 00:24:10.330 rmmod nvme_keyring 00:24:10.330 14:38:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:10.330 14:38:17 -- nvmf/common.sh@123 -- # set -e 00:24:10.330 14:38:17 -- nvmf/common.sh@124 -- # return 0 00:24:10.330 14:38:17 -- nvmf/common.sh@477 -- # '[' -n 83311 ']' 00:24:10.330 14:38:17 -- nvmf/common.sh@478 -- # killprocess 83311 00:24:10.330 14:38:17 -- common/autotest_common.sh@936 -- # '[' -z 83311 ']' 00:24:10.330 14:38:17 -- common/autotest_common.sh@940 -- # kill -0 83311 00:24:10.330 14:38:17 -- common/autotest_common.sh@941 -- # uname 00:24:10.330 14:38:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:10.330 14:38:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83311 00:24:10.330 killing process with pid 83311 00:24:10.330 14:38:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:10.330 14:38:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:10.330 14:38:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83311' 00:24:10.330 14:38:17 -- common/autotest_common.sh@955 -- # kill 83311 00:24:10.330 [2024-12-06 14:38:17.270892] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:10.330 14:38:17 -- common/autotest_common.sh@960 -- # wait 83311 00:24:10.589 14:38:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:10.589 14:38:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:10.589 14:38:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:10.589 14:38:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.589 14:38:17 
-- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:10.847 14:38:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.847 14:38:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.847 14:38:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.847 14:38:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:10.847 ************************************ 00:24:10.847 END TEST nvmf_identify 00:24:10.847 ************************************ 00:24:10.847 00:24:10.847 real 0m2.803s 00:24:10.847 user 0m7.655s 00:24:10.847 sys 0m0.702s 00:24:10.847 14:38:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:10.847 14:38:17 -- common/autotest_common.sh@10 -- # set +x 00:24:10.847 14:38:17 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:10.847 14:38:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:10.847 14:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.847 14:38:17 -- common/autotest_common.sh@10 -- # set +x 00:24:10.847 ************************************ 00:24:10.847 START TEST nvmf_perf 00:24:10.847 ************************************ 00:24:10.847 14:38:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:10.847 * Looking for test storage... 00:24:10.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:10.847 14:38:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:10.847 14:38:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:10.847 14:38:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:11.107 14:38:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:11.107 14:38:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:11.107 14:38:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:11.107 14:38:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:11.107 14:38:17 -- scripts/common.sh@335 -- # IFS=.-: 00:24:11.107 14:38:17 -- scripts/common.sh@335 -- # read -ra ver1 00:24:11.107 14:38:17 -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.107 14:38:17 -- scripts/common.sh@336 -- # read -ra ver2 00:24:11.107 14:38:17 -- scripts/common.sh@337 -- # local 'op=<' 00:24:11.107 14:38:17 -- scripts/common.sh@339 -- # ver1_l=2 00:24:11.107 14:38:17 -- scripts/common.sh@340 -- # ver2_l=1 00:24:11.107 14:38:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:11.107 14:38:17 -- scripts/common.sh@343 -- # case "$op" in 00:24:11.107 14:38:17 -- scripts/common.sh@344 -- # : 1 00:24:11.107 14:38:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:11.107 14:38:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.107 14:38:17 -- scripts/common.sh@364 -- # decimal 1 00:24:11.107 14:38:17 -- scripts/common.sh@352 -- # local d=1 00:24:11.107 14:38:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.107 14:38:17 -- scripts/common.sh@354 -- # echo 1 00:24:11.107 14:38:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:11.107 14:38:17 -- scripts/common.sh@365 -- # decimal 2 00:24:11.107 14:38:17 -- scripts/common.sh@352 -- # local d=2 00:24:11.107 14:38:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.107 14:38:17 -- scripts/common.sh@354 -- # echo 2 00:24:11.107 14:38:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:11.107 14:38:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:11.107 14:38:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:11.107 14:38:17 -- scripts/common.sh@367 -- # return 0 00:24:11.107 14:38:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.107 14:38:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.107 --rc genhtml_branch_coverage=1 00:24:11.107 --rc genhtml_function_coverage=1 00:24:11.107 --rc genhtml_legend=1 00:24:11.107 --rc geninfo_all_blocks=1 00:24:11.107 --rc geninfo_unexecuted_blocks=1 00:24:11.107 00:24:11.107 ' 00:24:11.107 14:38:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.107 --rc genhtml_branch_coverage=1 00:24:11.107 --rc genhtml_function_coverage=1 00:24:11.107 --rc genhtml_legend=1 00:24:11.107 --rc geninfo_all_blocks=1 00:24:11.107 --rc geninfo_unexecuted_blocks=1 00:24:11.107 00:24:11.107 ' 00:24:11.107 14:38:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.107 --rc genhtml_branch_coverage=1 00:24:11.107 --rc genhtml_function_coverage=1 00:24:11.107 --rc genhtml_legend=1 00:24:11.107 --rc geninfo_all_blocks=1 00:24:11.107 --rc geninfo_unexecuted_blocks=1 00:24:11.107 00:24:11.107 ' 00:24:11.107 14:38:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.107 --rc genhtml_branch_coverage=1 00:24:11.107 --rc genhtml_function_coverage=1 00:24:11.107 --rc genhtml_legend=1 00:24:11.107 --rc geninfo_all_blocks=1 00:24:11.107 --rc geninfo_unexecuted_blocks=1 00:24:11.107 00:24:11.107 ' 00:24:11.107 14:38:17 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:11.107 14:38:17 -- nvmf/common.sh@7 -- # uname -s 00:24:11.107 14:38:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.107 14:38:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.107 14:38:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.107 14:38:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.107 14:38:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.107 14:38:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.107 14:38:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.107 14:38:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.107 14:38:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.107 14:38:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.107 14:38:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:11.107 
14:38:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:24:11.107 14:38:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.107 14:38:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.107 14:38:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:11.107 14:38:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:11.107 14:38:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.107 14:38:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.107 14:38:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.107 14:38:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.107 14:38:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.107 14:38:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.107 14:38:17 -- paths/export.sh@5 -- # export PATH 00:24:11.107 14:38:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.107 14:38:17 -- nvmf/common.sh@46 -- # : 0 00:24:11.107 14:38:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:11.107 14:38:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:11.107 14:38:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:11.107 14:38:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.107 14:38:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.107 14:38:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
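For orientation amid the trace: the nvmf/common.sh lines above and just below pin down the constants that every connect string later in this perf run reuses. A condensed restatement (values copied from the trace itself, not an extra step the test performs):

  NVMF_PORT=4420          # the 'trsvcid:4420' in every -r transport string below
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d
  NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d
  NET_TYPE=virt           # veth/netns topology; the target ends up reachable at 10.0.0.2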
00:24:11.107 14:38:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:11.107 14:38:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:11.107 14:38:17 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:11.107 14:38:17 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:11.107 14:38:17 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.107 14:38:17 -- host/perf.sh@17 -- # nvmftestinit 00:24:11.107 14:38:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:11.107 14:38:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.107 14:38:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:11.107 14:38:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:11.107 14:38:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:11.107 14:38:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.107 14:38:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.107 14:38:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.107 14:38:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:11.107 14:38:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:11.107 14:38:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:11.107 14:38:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:11.107 14:38:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:11.107 14:38:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:11.107 14:38:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.107 14:38:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.107 14:38:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:11.107 14:38:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:11.107 14:38:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:11.107 14:38:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:11.107 14:38:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:11.107 14:38:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.107 14:38:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:11.107 14:38:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:11.107 14:38:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:11.107 14:38:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:11.107 14:38:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:11.107 14:38:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:11.107 Cannot find device "nvmf_tgt_br" 00:24:11.107 14:38:17 -- nvmf/common.sh@154 -- # true 00:24:11.107 14:38:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:11.107 Cannot find device "nvmf_tgt_br2" 00:24:11.107 14:38:17 -- nvmf/common.sh@155 -- # true 00:24:11.107 14:38:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:11.108 14:38:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:11.108 Cannot find device "nvmf_tgt_br" 00:24:11.108 14:38:17 -- nvmf/common.sh@157 -- # true 00:24:11.108 14:38:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:11.108 Cannot find device "nvmf_tgt_br2" 00:24:11.108 14:38:17 -- nvmf/common.sh@158 -- # true 00:24:11.108 14:38:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:11.108 14:38:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:11.108 14:38:17 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:11.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:11.108 14:38:17 -- nvmf/common.sh@161 -- # true 00:24:11.108 14:38:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:11.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:11.108 14:38:17 -- nvmf/common.sh@162 -- # true 00:24:11.108 14:38:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:11.108 14:38:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:11.108 14:38:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:11.108 14:38:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:11.108 14:38:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:11.108 14:38:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:11.367 14:38:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:11.367 14:38:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:11.367 14:38:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:11.367 14:38:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:11.367 14:38:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:11.367 14:38:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:11.367 14:38:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:11.367 14:38:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:11.367 14:38:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:11.367 14:38:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:11.367 14:38:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:11.367 14:38:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:11.367 14:38:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:11.367 14:38:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:11.367 14:38:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:11.367 14:38:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:11.367 14:38:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:11.367 14:38:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:11.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:24:11.367 00:24:11.367 --- 10.0.0.2 ping statistics --- 00:24:11.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.367 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:11.367 14:38:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:11.367 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:11.367 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:24:11.367 00:24:11.367 --- 10.0.0.3 ping statistics --- 00:24:11.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.367 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:11.367 14:38:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:11.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:24:11.367 00:24:11.367 --- 10.0.0.1 ping statistics --- 00:24:11.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.367 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:11.367 14:38:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.367 14:38:18 -- nvmf/common.sh@421 -- # return 0 00:24:11.367 14:38:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:11.367 14:38:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.367 14:38:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:11.367 14:38:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:11.367 14:38:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.367 14:38:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:11.367 14:38:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:11.367 14:38:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:11.367 14:38:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:11.367 14:38:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.367 14:38:18 -- common/autotest_common.sh@10 -- # set +x 00:24:11.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.367 14:38:18 -- nvmf/common.sh@469 -- # nvmfpid=83542 00:24:11.367 14:38:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:11.367 14:38:18 -- nvmf/common.sh@470 -- # waitforlisten 83542 00:24:11.367 14:38:18 -- common/autotest_common.sh@829 -- # '[' -z 83542 ']' 00:24:11.367 14:38:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.367 14:38:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.367 14:38:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.367 14:38:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.367 14:38:18 -- common/autotest_common.sh@10 -- # set +x 00:24:11.367 [2024-12-06 14:38:18.276791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:11.367 [2024-12-06 14:38:18.277104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.626 [2024-12-06 14:38:18.415671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.626 [2024-12-06 14:38:18.514545] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:11.626 [2024-12-06 14:38:18.514827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.626 [2024-12-06 14:38:18.515216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
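The nvmf_veth_init sequence traced above boils down to a short, reproducible recipe: a network namespace for the target, veth pairs bridged to the initiator side, and a firewall exception for the NVMe/TCP port. A minimal sketch assembled from the ip/iptables commands in this log (interface and namespace names as used by nvmf/common.sh; the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # reachability check, matching the ping statistics above

The target application is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and perf.sh builds the subsystem over RPC further down: nvmf_create_transport -t tcp -o, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, nvmf_subsystem_add_ns for Malloc0 and Nvme0n1, and nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420.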
00:24:11.626 [2024-12-06 14:38:18.515355] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.626 [2024-12-06 14:38:18.515698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.626 [2024-12-06 14:38:18.515759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.626 [2024-12-06 14:38:18.515821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.626 [2024-12-06 14:38:18.515833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.562 14:38:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.562 14:38:19 -- common/autotest_common.sh@862 -- # return 0 00:24:12.562 14:38:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:12.562 14:38:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.562 14:38:19 -- common/autotest_common.sh@10 -- # set +x 00:24:12.562 14:38:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.562 14:38:19 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:12.562 14:38:19 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:24:12.821 14:38:19 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:24:12.821 14:38:19 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:13.389 14:38:20 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:24:13.389 14:38:20 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:13.389 14:38:20 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:13.389 14:38:20 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:24:13.389 14:38:20 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:13.389 14:38:20 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:13.389 14:38:20 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:13.648 [2024-12-06 14:38:20.526653] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.648 14:38:20 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.907 14:38:20 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:13.907 14:38:20 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:14.167 14:38:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:14.167 14:38:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:14.425 14:38:21 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.683 [2024-12-06 14:38:21.620204] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.683 14:38:21 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:14.942 14:38:21 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:24:14.942 14:38:21 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:24:14.942 14:38:21 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:14.942 14:38:21 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:24:16.316 Initializing NVMe Controllers 00:24:16.316 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:24:16.316 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:24:16.316 Initialization complete. Launching workers. 00:24:16.316 ======================================================== 00:24:16.316 Latency(us) 00:24:16.316 Device Information : IOPS MiB/s Average min max 00:24:16.316 PCIE (0000:00:06.0) NSID 1 from core 0: 21920.00 85.62 1459.85 363.94 8156.10 00:24:16.316 ======================================================== 00:24:16.316 Total : 21920.00 85.62 1459.85 363.94 8156.10 00:24:16.316 00:24:16.316 14:38:22 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:17.706 Initializing NVMe Controllers 00:24:17.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:17.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:17.706 Initialization complete. Launching workers. 00:24:17.706 ======================================================== 00:24:17.706 Latency(us) 00:24:17.706 Device Information : IOPS MiB/s Average min max 00:24:17.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3311.98 12.94 301.64 105.77 7234.09 00:24:17.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8186.44 5984.07 14986.72 00:24:17.706 ======================================================== 00:24:17.706 Total : 3434.98 13.42 583.98 105.77 14986.72 00:24:17.706 00:24:17.706 14:38:24 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:18.648 Initializing NVMe Controllers 00:24:18.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:18.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:18.648 Initialization complete. Launching workers. 
00:24:18.648 ======================================================== 00:24:18.648 Latency(us) 00:24:18.648 Device Information : IOPS MiB/s Average min max 00:24:18.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8518.99 33.28 3766.87 729.32 10183.18 00:24:18.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2693.00 10.52 11966.36 6000.61 22487.50 00:24:18.648 ======================================================== 00:24:18.648 Total : 11211.99 43.80 5736.29 729.32 22487.50 00:24:18.648 00:24:18.648 14:38:25 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:24:18.648 14:38:25 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.178 [2024-12-06 14:38:28.050454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55c50 is same with the state(5) to be set 00:24:21.436 Initializing NVMe Controllers 00:24:21.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.436 Controller IO queue size 128, less than required. 00:24:21.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:21.436 Controller IO queue size 128, less than required. 00:24:21.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:21.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:21.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:21.436 Initialization complete. Launching workers. 00:24:21.436 ======================================================== 00:24:21.436 Latency(us) 00:24:21.436 Device Information : IOPS MiB/s Average min max 00:24:21.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1186.36 296.59 110781.63 71002.06 175108.74 00:24:21.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.43 147.61 228066.03 95248.69 335604.56 00:24:21.436 ======================================================== 00:24:21.436 Total : 1776.79 444.20 149755.43 71002.06 335604.56 00:24:21.436 00:24:21.436 14:38:28 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:21.436 No valid NVMe controllers or AIO or URING devices found 00:24:21.436 Initializing NVMe Controllers 00:24:21.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.436 Controller IO queue size 128, less than required. 00:24:21.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:21.436 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:21.436 Controller IO queue size 128, less than required. 00:24:21.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:21.436 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:24:21.436 WARNING: Some requested NVMe devices were skipped 00:24:21.436 14:38:28 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:23.989 Initializing NVMe Controllers 00:24:23.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.989 Controller IO queue size 128, less than required. 00:24:23.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.989 Controller IO queue size 128, less than required. 00:24:23.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.989 Initialization complete. Launching workers. 00:24:23.989 00:24:23.989 ==================== 00:24:23.989 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:23.989 TCP transport: 00:24:23.989 polls: 7339 00:24:23.989 idle_polls: 4544 00:24:23.989 sock_completions: 2795 00:24:23.989 nvme_completions: 4294 00:24:23.989 submitted_requests: 6658 00:24:23.989 queued_requests: 1 00:24:23.989 00:24:23.989 ==================== 00:24:23.989 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:23.989 TCP transport: 00:24:23.989 polls: 9787 00:24:23.989 idle_polls: 7014 00:24:23.989 sock_completions: 2773 00:24:23.989 nvme_completions: 5586 00:24:23.989 submitted_requests: 8642 00:24:23.989 queued_requests: 1 00:24:23.989 ======================================================== 00:24:23.989 Latency(us) 00:24:23.989 Device Information : IOPS MiB/s Average min max 00:24:23.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1136.83 284.21 115576.84 82232.18 180529.78 00:24:23.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1459.78 364.94 88068.66 47692.10 123243.53 00:24:23.990 ======================================================== 00:24:23.990 Total : 2596.60 649.15 100112.09 47692.10 180529.78 00:24:23.990 00:24:23.990 14:38:30 -- host/perf.sh@66 -- # sync 00:24:23.990 14:38:30 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.554 14:38:31 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:24:24.554 14:38:31 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:24:24.554 14:38:31 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:24:24.812 14:38:31 -- host/perf.sh@72 -- # ls_guid=df52e702-161d-4ce8-b526-9c83d898e75a 00:24:24.812 14:38:31 -- host/perf.sh@73 -- # get_lvs_free_mb df52e702-161d-4ce8-b526-9c83d898e75a 00:24:24.812 14:38:31 -- common/autotest_common.sh@1353 -- # local lvs_uuid=df52e702-161d-4ce8-b526-9c83d898e75a 00:24:24.812 14:38:31 -- common/autotest_common.sh@1354 -- # local lvs_info 00:24:24.812 14:38:31 -- common/autotest_common.sh@1355 -- # local fc 00:24:24.812 14:38:31 -- common/autotest_common.sh@1356 -- # local cs 00:24:24.812 14:38:31 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:25.070 14:38:31 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:24:25.070 
{ 00:24:25.070 "base_bdev": "Nvme0n1", 00:24:25.070 "block_size": 4096, 00:24:25.070 "cluster_size": 4194304, 00:24:25.070 "free_clusters": 1278, 00:24:25.070 "name": "lvs_0", 00:24:25.070 "total_data_clusters": 1278, 00:24:25.070 "uuid": "df52e702-161d-4ce8-b526-9c83d898e75a" 00:24:25.070 } 00:24:25.070 ]' 00:24:25.070 14:38:31 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="df52e702-161d-4ce8-b526-9c83d898e75a") .free_clusters' 00:24:25.070 14:38:31 -- common/autotest_common.sh@1358 -- # fc=1278 00:24:25.070 14:38:31 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="df52e702-161d-4ce8-b526-9c83d898e75a") .cluster_size' 00:24:25.070 5112 00:24:25.070 14:38:31 -- common/autotest_common.sh@1359 -- # cs=4194304 00:24:25.070 14:38:31 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:24:25.070 14:38:31 -- common/autotest_common.sh@1363 -- # echo 5112 00:24:25.070 14:38:31 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:24:25.070 14:38:31 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u df52e702-161d-4ce8-b526-9c83d898e75a lbd_0 5112 00:24:25.328 14:38:32 -- host/perf.sh@80 -- # lb_guid=ac1f5e60-1665-4349-bdae-ece1bc25f8e3 00:24:25.328 14:38:32 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore ac1f5e60-1665-4349-bdae-ece1bc25f8e3 lvs_n_0 00:24:25.895 14:38:32 -- host/perf.sh@83 -- # ls_nested_guid=252364c0-8427-45c5-b5da-141acb80ed62 00:24:25.895 14:38:32 -- host/perf.sh@84 -- # get_lvs_free_mb 252364c0-8427-45c5-b5da-141acb80ed62 00:24:25.895 14:38:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=252364c0-8427-45c5-b5da-141acb80ed62 00:24:25.895 14:38:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:24:25.895 14:38:32 -- common/autotest_common.sh@1355 -- # local fc 00:24:25.895 14:38:32 -- common/autotest_common.sh@1356 -- # local cs 00:24:25.895 14:38:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:26.153 14:38:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:24:26.153 { 00:24:26.153 "base_bdev": "Nvme0n1", 00:24:26.153 "block_size": 4096, 00:24:26.153 "cluster_size": 4194304, 00:24:26.153 "free_clusters": 0, 00:24:26.153 "name": "lvs_0", 00:24:26.153 "total_data_clusters": 1278, 00:24:26.153 "uuid": "df52e702-161d-4ce8-b526-9c83d898e75a" 00:24:26.153 }, 00:24:26.153 { 00:24:26.153 "base_bdev": "ac1f5e60-1665-4349-bdae-ece1bc25f8e3", 00:24:26.153 "block_size": 4096, 00:24:26.153 "cluster_size": 4194304, 00:24:26.153 "free_clusters": 1276, 00:24:26.153 "name": "lvs_n_0", 00:24:26.153 "total_data_clusters": 1276, 00:24:26.153 "uuid": "252364c0-8427-45c5-b5da-141acb80ed62" 00:24:26.153 } 00:24:26.153 ]' 00:24:26.153 14:38:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="252364c0-8427-45c5-b5da-141acb80ed62") .free_clusters' 00:24:26.153 14:38:32 -- common/autotest_common.sh@1358 -- # fc=1276 00:24:26.153 14:38:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="252364c0-8427-45c5-b5da-141acb80ed62") .cluster_size' 00:24:26.153 5104 00:24:26.153 14:38:32 -- common/autotest_common.sh@1359 -- # cs=4194304 00:24:26.153 14:38:32 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:24:26.153 14:38:32 -- common/autotest_common.sh@1363 -- # echo 5104 00:24:26.153 14:38:32 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:24:26.153 14:38:32 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
252364c0-8427-45c5-b5da-141acb80ed62 lbd_nest_0 5104 00:24:26.412 14:38:33 -- host/perf.sh@88 -- # lb_nested_guid=d645d529-9f05-44cb-93b5-ce2df75a59f6 00:24:26.412 14:38:33 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.671 14:38:33 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:24:26.671 14:38:33 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d645d529-9f05-44cb-93b5-ce2df75a59f6 00:24:26.929 14:38:33 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.187 14:38:34 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:24:27.187 14:38:34 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:24:27.187 14:38:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:27.188 14:38:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:27.188 14:38:34 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:27.447 No valid NVMe controllers or AIO or URING devices found 00:24:27.447 Initializing NVMe Controllers 00:24:27.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:27.447 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:27.447 WARNING: Some requested NVMe devices were skipped 00:24:27.447 14:38:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:27.447 14:38:34 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:39.646 Initializing NVMe Controllers 00:24:39.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:39.646 Initialization complete. Launching workers. 
00:24:39.646 ======================================================== 00:24:39.646 Latency(us) 00:24:39.646 Device Information : IOPS MiB/s Average min max 00:24:39.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 812.24 101.53 1230.77 378.83 8534.46 00:24:39.646 ======================================================== 00:24:39.646 Total : 812.24 101.53 1230.77 378.83 8534.46 00:24:39.646 00:24:39.646 14:38:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:39.646 14:38:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:39.646 14:38:44 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:39.646 No valid NVMe controllers or AIO or URING devices found 00:24:39.646 Initializing NVMe Controllers 00:24:39.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.646 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:39.646 WARNING: Some requested NVMe devices were skipped 00:24:39.646 14:38:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:39.646 14:38:44 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:49.616 Initializing NVMe Controllers 00:24:49.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:49.616 Initialization complete. Launching workers. 00:24:49.616 ======================================================== 00:24:49.616 Latency(us) 00:24:49.616 Device Information : IOPS MiB/s Average min max 00:24:49.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1113.36 139.17 28776.73 8104.35 278073.42 00:24:49.616 ======================================================== 00:24:49.616 Total : 1113.36 139.17 28776.73 8104.35 278073.42 00:24:49.616 00:24:49.616 14:38:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:49.616 14:38:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:49.616 14:38:55 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:49.616 No valid NVMe controllers or AIO or URING devices found 00:24:49.616 Initializing NVMe Controllers 00:24:49.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.616 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:49.616 WARNING: Some requested NVMe devices were skipped 00:24:49.616 14:38:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:49.616 14:38:55 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.650 Initializing NVMe Controllers 00:24:59.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:59.650 Controller IO queue size 128, less than required. 00:24:59.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
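Every latency table in this stretch of the log comes from the same loop in perf.sh: queue depths 1, 32 and 128 crossed with I/O sizes 512 and 131072 bytes, each combination invoking spdk_nvme_perf against the TCP listener created earlier. A representative invocation, copied from the command lines above, with the flag meanings spelled out as comments (the comments are explanatory glosses, not output from the run):

  # -q   I/O queue depth
  # -o   I/O size in bytes
  # -w   workload pattern; -M 50 means a 50/50 read/write mix
  # -t   run time in seconds
  # -r   NVMe-oF transport ID of the listener set up earlier in this log
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The 512-byte passes end with "No valid NVMe controllers or AIO or URING devices found" because, as the accompanying warnings state, the lvol-backed namespace has a 4096-byte block size, so a 512-byte I/O size is invalid for it and the namespace is removed from the test; only the 131072-byte passes produce latency tables.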
00:24:59.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:59.650 Initialization complete. Launching workers. 00:24:59.650 ======================================================== 00:24:59.650 Latency(us) 00:24:59.650 Device Information : IOPS MiB/s Average min max 00:24:59.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3530.40 441.30 36306.92 13174.17 87392.28 00:24:59.650 ======================================================== 00:24:59.650 Total : 3530.40 441.30 36306.92 13174.17 87392.28 00:24:59.650 00:24:59.650 14:39:05 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.650 14:39:06 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d645d529-9f05-44cb-93b5-ce2df75a59f6 00:24:59.651 14:39:06 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:59.908 14:39:06 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ac1f5e60-1665-4349-bdae-ece1bc25f8e3 00:25:00.165 14:39:07 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:00.431 14:39:07 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:00.431 14:39:07 -- host/perf.sh@114 -- # nvmftestfini 00:25:00.431 14:39:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:00.431 14:39:07 -- nvmf/common.sh@116 -- # sync 00:25:00.431 14:39:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:00.431 14:39:07 -- nvmf/common.sh@119 -- # set +e 00:25:00.431 14:39:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:00.431 14:39:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:00.431 rmmod nvme_tcp 00:25:00.431 rmmod nvme_fabrics 00:25:00.431 rmmod nvme_keyring 00:25:00.431 14:39:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:00.431 14:39:07 -- nvmf/common.sh@123 -- # set -e 00:25:00.431 14:39:07 -- nvmf/common.sh@124 -- # return 0 00:25:00.431 14:39:07 -- nvmf/common.sh@477 -- # '[' -n 83542 ']' 00:25:00.431 14:39:07 -- nvmf/common.sh@478 -- # killprocess 83542 00:25:00.431 14:39:07 -- common/autotest_common.sh@936 -- # '[' -z 83542 ']' 00:25:00.431 14:39:07 -- common/autotest_common.sh@940 -- # kill -0 83542 00:25:00.431 14:39:07 -- common/autotest_common.sh@941 -- # uname 00:25:00.431 14:39:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.431 14:39:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83542 00:25:00.431 14:39:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:00.431 14:39:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:00.431 killing process with pid 83542 00:25:00.431 14:39:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83542' 00:25:00.431 14:39:07 -- common/autotest_common.sh@955 -- # kill 83542 00:25:00.431 14:39:07 -- common/autotest_common.sh@960 -- # wait 83542 00:25:02.344 14:39:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:02.344 14:39:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:02.344 14:39:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:02.344 14:39:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.344 14:39:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:02.344 14:39:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.344 14:39:08 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:25:02.344 14:39:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.344 14:39:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:02.344 00:25:02.344 real 0m51.368s 00:25:02.344 user 3m11.699s 00:25:02.344 sys 0m10.891s 00:25:02.344 14:39:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:02.344 ************************************ 00:25:02.344 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:25:02.344 END TEST nvmf_perf 00:25:02.344 ************************************ 00:25:02.344 14:39:09 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:02.344 14:39:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:02.344 14:39:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:02.344 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:25:02.344 ************************************ 00:25:02.344 START TEST nvmf_fio_host 00:25:02.344 ************************************ 00:25:02.344 14:39:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:02.344 * Looking for test storage... 00:25:02.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:02.344 14:39:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:02.344 14:39:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:02.344 14:39:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:02.344 14:39:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:02.344 14:39:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:02.344 14:39:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:02.344 14:39:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:02.344 14:39:09 -- scripts/common.sh@335 -- # IFS=.-: 00:25:02.344 14:39:09 -- scripts/common.sh@335 -- # read -ra ver1 00:25:02.344 14:39:09 -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.344 14:39:09 -- scripts/common.sh@336 -- # read -ra ver2 00:25:02.344 14:39:09 -- scripts/common.sh@337 -- # local 'op=<' 00:25:02.344 14:39:09 -- scripts/common.sh@339 -- # ver1_l=2 00:25:02.344 14:39:09 -- scripts/common.sh@340 -- # ver2_l=1 00:25:02.344 14:39:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:02.344 14:39:09 -- scripts/common.sh@343 -- # case "$op" in 00:25:02.344 14:39:09 -- scripts/common.sh@344 -- # : 1 00:25:02.344 14:39:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:02.344 14:39:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:02.344 14:39:09 -- scripts/common.sh@364 -- # decimal 1 00:25:02.344 14:39:09 -- scripts/common.sh@352 -- # local d=1 00:25:02.344 14:39:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.344 14:39:09 -- scripts/common.sh@354 -- # echo 1 00:25:02.344 14:39:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:02.344 14:39:09 -- scripts/common.sh@365 -- # decimal 2 00:25:02.344 14:39:09 -- scripts/common.sh@352 -- # local d=2 00:25:02.344 14:39:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.344 14:39:09 -- scripts/common.sh@354 -- # echo 2 00:25:02.344 14:39:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:02.344 14:39:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:02.344 14:39:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:02.344 14:39:09 -- scripts/common.sh@367 -- # return 0 00:25:02.344 14:39:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.344 14:39:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:02.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.344 --rc genhtml_branch_coverage=1 00:25:02.344 --rc genhtml_function_coverage=1 00:25:02.344 --rc genhtml_legend=1 00:25:02.344 --rc geninfo_all_blocks=1 00:25:02.344 --rc geninfo_unexecuted_blocks=1 00:25:02.344 00:25:02.344 ' 00:25:02.344 14:39:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:02.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.344 --rc genhtml_branch_coverage=1 00:25:02.344 --rc genhtml_function_coverage=1 00:25:02.344 --rc genhtml_legend=1 00:25:02.344 --rc geninfo_all_blocks=1 00:25:02.344 --rc geninfo_unexecuted_blocks=1 00:25:02.344 00:25:02.344 ' 00:25:02.344 14:39:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:02.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.344 --rc genhtml_branch_coverage=1 00:25:02.344 --rc genhtml_function_coverage=1 00:25:02.344 --rc genhtml_legend=1 00:25:02.344 --rc geninfo_all_blocks=1 00:25:02.344 --rc geninfo_unexecuted_blocks=1 00:25:02.344 00:25:02.344 ' 00:25:02.344 14:39:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:02.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.344 --rc genhtml_branch_coverage=1 00:25:02.344 --rc genhtml_function_coverage=1 00:25:02.344 --rc genhtml_legend=1 00:25:02.344 --rc geninfo_all_blocks=1 00:25:02.344 --rc geninfo_unexecuted_blocks=1 00:25:02.344 00:25:02.344 ' 00:25:02.344 14:39:09 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:02.344 14:39:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.344 14:39:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.344 14:39:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.344 14:39:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- paths/export.sh@5 -- # export PATH 00:25:02.345 14:39:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:02.345 14:39:09 -- nvmf/common.sh@7 -- # uname -s 00:25:02.345 14:39:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.345 14:39:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.345 14:39:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.345 14:39:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.345 14:39:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.345 14:39:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.345 14:39:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.345 14:39:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.345 14:39:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.345 14:39:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.345 14:39:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:25:02.345 14:39:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:25:02.345 14:39:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.345 14:39:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.345 14:39:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:02.345 14:39:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:02.345 14:39:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.345 14:39:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.345 14:39:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.345 14:39:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- paths/export.sh@5 -- # export PATH 00:25:02.345 14:39:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.345 14:39:09 -- nvmf/common.sh@46 -- # : 0 00:25:02.345 14:39:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:02.345 14:39:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:02.345 14:39:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:02.345 14:39:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.345 14:39:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.345 14:39:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:02.345 14:39:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:02.345 14:39:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:02.345 14:39:09 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.345 14:39:09 -- host/fio.sh@14 -- # nvmftestinit 00:25:02.345 14:39:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:02.345 14:39:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.345 14:39:09 -- nvmf/common.sh@436 -- # prepare_net_devs 
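The NVME_HOSTNQN, NVME_HOSTID, NVME_CONNECT and NVME_HOST variables initialised here by nvmf/common.sh are only consumed when a test drives the kernel initiator; this fio host test uses the SPDK fio plugin instead. Purely as a hedged illustration of what they are for (the connect below is not executed at this point in the log, and the address and subsystem NQN are simply the ones this run configures):

  # Sketch only: typical kernel-initiator use of the NVME_HOST* variables.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID portion of the host NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # Connect to the subsystem exported at 10.0.0.2:4420, check it, detach again.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
  nvme list
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1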
00:25:02.345 14:39:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:02.345 14:39:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:02.345 14:39:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.345 14:39:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.345 14:39:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.345 14:39:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:02.345 14:39:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:02.345 14:39:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:02.345 14:39:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:02.345 14:39:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:02.345 14:39:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:02.345 14:39:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.345 14:39:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.345 14:39:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:02.345 14:39:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:02.345 14:39:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:02.345 14:39:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:02.345 14:39:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:02.345 14:39:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.345 14:39:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:02.345 14:39:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:02.345 14:39:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:02.345 14:39:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:02.345 14:39:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:02.345 14:39:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:02.345 Cannot find device "nvmf_tgt_br" 00:25:02.345 14:39:09 -- nvmf/common.sh@154 -- # true 00:25:02.345 14:39:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:02.345 Cannot find device "nvmf_tgt_br2" 00:25:02.345 14:39:09 -- nvmf/common.sh@155 -- # true 00:25:02.345 14:39:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:02.604 14:39:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:02.604 Cannot find device "nvmf_tgt_br" 00:25:02.604 14:39:09 -- nvmf/common.sh@157 -- # true 00:25:02.604 14:39:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:02.604 Cannot find device "nvmf_tgt_br2" 00:25:02.604 14:39:09 -- nvmf/common.sh@158 -- # true 00:25:02.604 14:39:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:02.604 14:39:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:02.604 14:39:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:02.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.604 14:39:09 -- nvmf/common.sh@161 -- # true 00:25:02.604 14:39:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:02.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.604 14:39:09 -- nvmf/common.sh@162 -- # true 00:25:02.604 14:39:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:02.604 14:39:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:02.604 14:39:09 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:02.604 14:39:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:02.604 14:39:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:02.604 14:39:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:02.604 14:39:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:02.604 14:39:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:02.604 14:39:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:02.604 14:39:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:02.604 14:39:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:02.604 14:39:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:02.604 14:39:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:02.604 14:39:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:02.604 14:39:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:02.604 14:39:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:02.604 14:39:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:02.604 14:39:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:02.604 14:39:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:02.604 14:39:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:02.604 14:39:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:02.863 14:39:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:02.863 14:39:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:02.863 14:39:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:02.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:25:02.863 00:25:02.863 --- 10.0.0.2 ping statistics --- 00:25:02.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.863 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:02.863 14:39:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:02.863 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:02.863 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:25:02.863 00:25:02.863 --- 10.0.0.3 ping statistics --- 00:25:02.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.863 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:02.863 14:39:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:02.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:02.863 00:25:02.863 --- 10.0.0.1 ping statistics --- 00:25:02.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.863 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:02.863 14:39:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.863 14:39:09 -- nvmf/common.sh@421 -- # return 0 00:25:02.863 14:39:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.863 14:39:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.863 14:39:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.863 14:39:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.863 14:39:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.863 14:39:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.863 14:39:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.863 14:39:09 -- host/fio.sh@16 -- # [[ y != y ]] 00:25:02.863 14:39:09 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:02.863 14:39:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.863 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:25:02.863 14:39:09 -- host/fio.sh@24 -- # nvmfpid=84518 00:25:02.863 14:39:09 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:02.863 14:39:09 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.863 14:39:09 -- host/fio.sh@28 -- # waitforlisten 84518 00:25:02.863 14:39:09 -- common/autotest_common.sh@829 -- # '[' -z 84518 ']' 00:25:02.863 14:39:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.863 14:39:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.863 14:39:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.863 14:39:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.863 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:25:02.863 [2024-12-06 14:39:09.686320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:02.863 [2024-12-06 14:39:09.686428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.863 [2024-12-06 14:39:09.825708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.122 [2024-12-06 14:39:09.939213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:03.122 [2024-12-06 14:39:09.939350] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.122 [2024-12-06 14:39:09.939362] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.122 [2024-12-06 14:39:09.939370] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
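nvmf_veth_init, traced over the preceding lines, builds the virtual topology that the 10.0.0.x pings just verified and inside which the nvmf_tgt application is then started. A condensed sketch of that setup follows; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the initial cleanup of leftover devices are omitted for brevity:

  #!/usr/bin/env bash
  # Condensed sketch of nvmf_veth_init: one veth pair for the initiator, one
  # for the target namespace, both bridged, target listening on 10.0.0.2.
  set -euo pipefail
  spdk=/home/vagrant/spdk_repo/spdk
  ns=nvmf_tgt_ns_spdk

  ip netns add "$ns"

  # veth pairs: the *_if ends carry the IPs, the *_br ends join the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$ns"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$ns" ip link set nvmf_tgt_if up
  ip netns exec "$ns" ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Let NVMe/TCP traffic in and allow hairpin forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2   # initiator-to-target reachability check

  # Run the SPDK target inside the namespace so it listens on 10.0.0.2;
  # fio.sh then waits for its RPC socket before issuing configuration.
  ip netns exec "$ns" "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &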
00:25:03.122 [2024-12-06 14:39:09.939583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.122 [2024-12-06 14:39:09.940693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.122 [2024-12-06 14:39:09.940833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.122 [2024-12-06 14:39:09.940958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.689 14:39:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.689 14:39:10 -- common/autotest_common.sh@862 -- # return 0 00:25:03.689 14:39:10 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:03.947 [2024-12-06 14:39:10.864959] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.947 14:39:10 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:03.947 14:39:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:03.947 14:39:10 -- common/autotest_common.sh@10 -- # set +x 00:25:04.205 14:39:10 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:04.463 Malloc1 00:25:04.463 14:39:11 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:04.722 14:39:11 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:04.981 14:39:11 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.240 [2024-12-06 14:39:11.958517] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.240 14:39:11 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:05.500 14:39:12 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:25:05.500 14:39:12 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:05.500 14:39:12 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:05.500 14:39:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:05.500 14:39:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:05.500 14:39:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:05.500 14:39:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:05.500 14:39:12 -- common/autotest_common.sh@1330 -- # shift 00:25:05.500 14:39:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:05.500 14:39:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:05.500 14:39:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:05.500 14:39:12 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:05.500 14:39:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:05.500 14:39:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:05.500 14:39:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:05.500 14:39:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:05.500 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:05.500 fio-3.35 00:25:05.500 Starting 1 thread 00:25:08.030 00:25:08.030 test: (groupid=0, jobs=1): err= 0: pid=84649: Fri Dec 6 14:39:14 2024 00:25:08.030 read: IOPS=9504, BW=37.1MiB/s (38.9MB/s)(74.5MiB/2006msec) 00:25:08.030 slat (nsec): min=1886, max=346658, avg=2623.40, stdev=3492.32 00:25:08.030 clat (usec): min=3380, max=12410, avg=7175.47, stdev=766.31 00:25:08.030 lat (usec): min=3427, max=12414, avg=7178.09, stdev=766.30 00:25:08.030 clat percentiles (usec): 00:25:08.030 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6587], 00:25:08.030 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308], 00:25:08.030 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8094], 95.00th=[ 8455], 00:25:08.030 | 99.00th=[ 9241], 99.50th=[ 9896], 99.90th=[11076], 99.95th=[11207], 00:25:08.030 | 99.99th=[12387] 00:25:08.030 bw ( KiB/s): min=36552, max=39736, per=99.95%, avg=38000.00, stdev=1314.11, samples=4 00:25:08.030 iops : min= 9138, max= 9934, avg=9500.00, stdev=328.53, samples=4 00:25:08.030 write: IOPS=9513, BW=37.2MiB/s (39.0MB/s)(74.5MiB/2006msec); 0 zone resets 00:25:08.030 slat (nsec): min=1981, max=240809, avg=2726.71, stdev=2500.16 00:25:08.030 clat (usec): min=2428, max=11202, avg=6252.80, stdev=648.14 00:25:08.030 lat (usec): min=2441, max=11205, avg=6255.53, stdev=648.17 00:25:08.030 clat percentiles (usec): 00:25:08.030 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:25:08.030 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6259], 60.00th=[ 6390], 00:25:08.030 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 7046], 95.00th=[ 7308], 00:25:08.030 | 99.00th=[ 7898], 99.50th=[ 8356], 99.90th=[10159], 99.95th=[10552], 00:25:08.030 | 99.99th=[11207] 00:25:08.030 bw ( KiB/s): min=36928, max=39552, per=99.98%, avg=38048.00, stdev=1126.25, samples=4 00:25:08.030 iops : min= 9232, max= 9888, avg=9512.00, stdev=281.56, samples=4 00:25:08.030 lat (msec) : 4=0.13%, 10=99.58%, 20=0.28% 00:25:08.030 cpu : usr=64.39%, sys=25.44%, ctx=12, majf=0, minf=5 00:25:08.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:08.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:08.030 issued rwts: total=19067,19084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:08.030 00:25:08.030 Run status group 0 (all jobs): 00:25:08.030 READ: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.5MiB (78.1MB), 
run=2006-2006msec 00:25:08.030 WRITE: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.5MiB (78.2MB), run=2006-2006msec 00:25:08.030 14:39:14 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:08.030 14:39:14 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:08.030 14:39:14 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:08.030 14:39:14 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.030 14:39:14 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:08.030 14:39:14 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:08.030 14:39:14 -- common/autotest_common.sh@1330 -- # shift 00:25:08.030 14:39:14 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:08.030 14:39:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:08.030 14:39:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:08.030 14:39:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:08.030 14:39:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:08.030 14:39:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:08.030 14:39:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:08.030 14:39:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:08.030 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:08.030 fio-3.35 00:25:08.030 Starting 1 thread 00:25:10.564 00:25:10.564 test: (groupid=0, jobs=1): err= 0: pid=84699: Fri Dec 6 14:39:17 2024 00:25:10.564 read: IOPS=7548, BW=118MiB/s (124MB/s)(236MiB/2004msec) 00:25:10.564 slat (usec): min=2, max=140, avg= 4.02, stdev= 2.87 00:25:10.564 clat (usec): min=2314, max=20636, avg=10084.51, stdev=2789.74 00:25:10.564 lat (usec): min=2318, max=20641, avg=10088.53, stdev=2790.05 00:25:10.564 clat percentiles (usec): 00:25:10.564 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7439], 00:25:10.564 | 30.00th=[ 8291], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10814], 00:25:10.564 | 70.00th=[11600], 80.00th=[12518], 90.00th=[13829], 95.00th=[14746], 00:25:10.564 | 99.00th=[16581], 99.50th=[17433], 99.90th=[20055], 99.95th=[20317], 00:25:10.564 | 99.99th=[20579] 00:25:10.564 bw ( KiB/s): min=59872, max=63776, per=51.09%, avg=61713.00, stdev=1598.95, samples=4 00:25:10.564 iops : 
min= 3742, max= 3986, avg=3857.00, stdev=99.94, samples=4 00:25:10.564 write: IOPS=4378, BW=68.4MiB/s (71.7MB/s)(126MiB/1840msec); 0 zone resets 00:25:10.564 slat (usec): min=29, max=382, avg=38.96, stdev=11.07 00:25:10.564 clat (usec): min=2424, max=22338, avg=11951.27, stdev=2543.54 00:25:10.564 lat (usec): min=2455, max=22382, avg=11990.22, stdev=2546.23 00:25:10.564 clat percentiles (usec): 00:25:10.564 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:25:10.564 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11731], 60.00th=[12387], 00:25:10.564 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15270], 95.00th=[16319], 00:25:10.564 | 99.00th=[19530], 99.50th=[20579], 99.90th=[21890], 99.95th=[22152], 00:25:10.564 | 99.99th=[22414] 00:25:10.564 bw ( KiB/s): min=61472, max=66496, per=91.42%, avg=64047.75, stdev=2053.41, samples=4 00:25:10.564 iops : min= 3842, max= 4156, avg=4002.75, stdev=128.34, samples=4 00:25:10.564 lat (msec) : 4=0.23%, 10=41.21%, 20=58.18%, 50=0.38% 00:25:10.564 cpu : usr=69.00%, sys=19.32%, ctx=6, majf=0, minf=1 00:25:10.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:10.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.564 issued rwts: total=15128,8057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.564 00:25:10.564 Run status group 0 (all jobs): 00:25:10.564 READ: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=236MiB (248MB), run=2004-2004msec 00:25:10.564 WRITE: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=126MiB (132MB), run=1840-1840msec 00:25:10.564 14:39:17 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.564 14:39:17 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:25:10.564 14:39:17 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:25:10.564 14:39:17 -- host/fio.sh@51 -- # get_nvme_bdfs 00:25:10.564 14:39:17 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:10.564 14:39:17 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:10.564 14:39:17 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:10.564 14:39:17 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:10.564 14:39:17 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:10.823 14:39:17 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:10.823 14:39:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:10.823 14:39:17 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:25:11.081 Nvme0n1 00:25:11.081 14:39:17 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:25:11.339 14:39:18 -- host/fio.sh@53 -- # ls_guid=55efbcf9-333a-4a0a-be33-7f5620f4eefd 00:25:11.339 14:39:18 -- host/fio.sh@54 -- # get_lvs_free_mb 55efbcf9-333a-4a0a-be33-7f5620f4eefd 00:25:11.339 14:39:18 -- common/autotest_common.sh@1353 -- # local lvs_uuid=55efbcf9-333a-4a0a-be33-7f5620f4eefd 00:25:11.339 14:39:18 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:11.339 14:39:18 -- common/autotest_common.sh@1355 -- # local fc 00:25:11.339 14:39:18 -- 
common/autotest_common.sh@1356 -- # local cs 00:25:11.339 14:39:18 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:11.598 14:39:18 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:11.598 { 00:25:11.598 "base_bdev": "Nvme0n1", 00:25:11.598 "block_size": 4096, 00:25:11.598 "cluster_size": 1073741824, 00:25:11.598 "free_clusters": 4, 00:25:11.598 "name": "lvs_0", 00:25:11.598 "total_data_clusters": 4, 00:25:11.598 "uuid": "55efbcf9-333a-4a0a-be33-7f5620f4eefd" 00:25:11.598 } 00:25:11.598 ]' 00:25:11.598 14:39:18 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="55efbcf9-333a-4a0a-be33-7f5620f4eefd") .free_clusters' 00:25:11.598 14:39:18 -- common/autotest_common.sh@1358 -- # fc=4 00:25:11.857 14:39:18 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="55efbcf9-333a-4a0a-be33-7f5620f4eefd") .cluster_size' 00:25:11.857 4096 00:25:11.857 14:39:18 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:25:11.857 14:39:18 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:25:11.857 14:39:18 -- common/autotest_common.sh@1363 -- # echo 4096 00:25:11.857 14:39:18 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:25:12.115 a1fd33fb-37f0-41a3-b871-7b65a72898db 00:25:12.115 14:39:18 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:25:12.389 14:39:19 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:25:12.648 14:39:19 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:12.906 14:39:19 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:12.906 14:39:19 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:12.906 14:39:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:12.906 14:39:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:12.906 14:39:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:12.906 14:39:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:12.906 14:39:19 -- common/autotest_common.sh@1330 -- # shift 00:25:12.906 14:39:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:12.906 14:39:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.906 14:39:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:12.906 14:39:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:12.906 14:39:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:12.906 14:39:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:12.906 14:39:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:12.906 14:39:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.906 14:39:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:12.906 14:39:19 -- 
common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:12.906 14:39:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:12.906 14:39:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:12.906 14:39:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:12.906 14:39:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:12.907 14:39:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:12.907 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:12.907 fio-3.35 00:25:12.907 Starting 1 thread 00:25:15.438 00:25:15.438 test: (groupid=0, jobs=1): err= 0: pid=84857: Fri Dec 6 14:39:22 2024 00:25:15.438 read: IOPS=6020, BW=23.5MiB/s (24.7MB/s)(47.2MiB/2009msec) 00:25:15.438 slat (nsec): min=1782, max=402281, avg=2674.89, stdev=4931.17 00:25:15.438 clat (usec): min=4499, max=17895, avg=11200.86, stdev=1112.92 00:25:15.438 lat (usec): min=4508, max=17897, avg=11203.54, stdev=1112.68 00:25:15.438 clat percentiles (usec): 00:25:15.438 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:25:15.438 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:25:15.438 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12518], 95.00th=[13042], 00:25:15.438 | 99.00th=[14091], 99.50th=[14615], 99.90th=[17433], 99.95th=[17695], 00:25:15.438 | 99.99th=[17957] 00:25:15.438 bw ( KiB/s): min=22770, max=24952, per=99.86%, avg=24048.50, stdev=940.99, samples=4 00:25:15.438 iops : min= 5692, max= 6238, avg=6012.00, stdev=235.47, samples=4 00:25:15.438 write: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(47.1MiB/2009msec); 0 zone resets 00:25:15.438 slat (nsec): min=1908, max=301948, avg=2793.49, stdev=3646.60 00:25:15.438 clat (usec): min=2610, max=17182, avg=9972.49, stdev=979.25 00:25:15.438 lat (usec): min=2624, max=17185, avg=9975.29, stdev=979.19 00:25:15.438 clat percentiles (usec): 00:25:15.438 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:25:15.438 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:25:15.438 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11469], 00:25:15.438 | 99.00th=[12256], 99.50th=[12780], 99.90th=[16057], 99.95th=[16450], 00:25:15.438 | 99.99th=[17171] 00:25:15.438 bw ( KiB/s): min=23512, max=24384, per=99.88%, avg=23990.00, stdev=429.39, samples=4 00:25:15.438 iops : min= 5878, max= 6096, avg=5997.50, stdev=107.35, samples=4 00:25:15.438 lat (msec) : 4=0.04%, 10=32.35%, 20=67.61% 00:25:15.438 cpu : usr=72.36%, sys=21.41%, ctx=35, majf=0, minf=5 00:25:15.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:25:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:15.438 issued rwts: total=12095,12063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:15.438 00:25:15.438 Run status group 0 (all jobs): 00:25:15.438 READ: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.2MiB (49.5MB), run=2009-2009msec 00:25:15.438 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2009-2009msec 00:25:15.438 14:39:22 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:15.438 14:39:22 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:25:16.003 14:39:22 -- host/fio.sh@64 -- # ls_nested_guid=22918a00-c201-4d62-9ad7-f5c3e5cb2f82 00:25:16.003 14:39:22 -- host/fio.sh@65 -- # get_lvs_free_mb 22918a00-c201-4d62-9ad7-f5c3e5cb2f82 00:25:16.003 14:39:22 -- common/autotest_common.sh@1353 -- # local lvs_uuid=22918a00-c201-4d62-9ad7-f5c3e5cb2f82 00:25:16.003 14:39:22 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:16.003 14:39:22 -- common/autotest_common.sh@1355 -- # local fc 00:25:16.003 14:39:22 -- common/autotest_common.sh@1356 -- # local cs 00:25:16.003 14:39:22 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:16.003 14:39:22 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:16.003 { 00:25:16.003 "base_bdev": "Nvme0n1", 00:25:16.003 "block_size": 4096, 00:25:16.003 "cluster_size": 1073741824, 00:25:16.003 "free_clusters": 0, 00:25:16.003 "name": "lvs_0", 00:25:16.003 "total_data_clusters": 4, 00:25:16.003 "uuid": "55efbcf9-333a-4a0a-be33-7f5620f4eefd" 00:25:16.003 }, 00:25:16.003 { 00:25:16.003 "base_bdev": "a1fd33fb-37f0-41a3-b871-7b65a72898db", 00:25:16.003 "block_size": 4096, 00:25:16.003 "cluster_size": 4194304, 00:25:16.003 "free_clusters": 1022, 00:25:16.003 "name": "lvs_n_0", 00:25:16.003 "total_data_clusters": 1022, 00:25:16.003 "uuid": "22918a00-c201-4d62-9ad7-f5c3e5cb2f82" 00:25:16.003 } 00:25:16.003 ]' 00:25:16.003 14:39:22 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="22918a00-c201-4d62-9ad7-f5c3e5cb2f82") .free_clusters' 00:25:16.261 14:39:22 -- common/autotest_common.sh@1358 -- # fc=1022 00:25:16.261 14:39:22 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="22918a00-c201-4d62-9ad7-f5c3e5cb2f82") .cluster_size' 00:25:16.261 4088 00:25:16.261 14:39:23 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:16.261 14:39:23 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:25:16.261 14:39:23 -- common/autotest_common.sh@1363 -- # echo 4088 00:25:16.261 14:39:23 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:25:16.519 90daa3e8-8ee3-4109-81a1-c2ec1a33f856 00:25:16.519 14:39:23 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:25:16.776 14:39:23 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:25:17.032 14:39:23 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:17.289 14:39:24 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.289 14:39:24 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.289 14:39:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:17.289 14:39:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:17.289 
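The free-space numbers just traced (free_clusters=1022, cluster_size=4194304, hence a 4088 MB nested lvol) follow the get_lvs_free_mb pattern: query the lvstore over RPC and convert clusters to megabytes. A condensed sketch, assuming the lvs_n_0 UUID from this run:

  # Sketch of the get_lvs_free_mb logic traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs_uuid=22918a00-c201-4d62-9ad7-f5c3e5cb2f82    # lvs_n_0 in this run

  lvs_info=$($rpc bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<< "$lvs_info")
  cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size"  <<< "$lvs_info")
  free_mb=$(( fc * cs / 1024 / 1024 ))             # 1022 clusters x 4 MiB = 4088 MB
  echo "$free_mb"

  # The nested lvol is then sized to fill the store exactly:
  $rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 "$free_mb"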
14:39:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:17.289 14:39:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.289 14:39:24 -- common/autotest_common.sh@1330 -- # shift 00:25:17.289 14:39:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:17.289 14:39:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:17.289 14:39:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:17.289 14:39:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.289 14:39:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:17.289 14:39:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:17.289 14:39:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:17.289 14:39:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.289 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:17.289 fio-3.35 00:25:17.289 Starting 1 thread 00:25:19.873 00:25:19.873 test: (groupid=0, jobs=1): err= 0: pid=84972: Fri Dec 6 14:39:26 2024 00:25:19.873 read: IOPS=5321, BW=20.8MiB/s (21.8MB/s)(41.8MiB/2010msec) 00:25:19.873 slat (nsec): min=1825, max=329299, avg=2818.76, stdev=4657.69 00:25:19.873 clat (usec): min=4793, max=22211, avg=12722.61, stdev=1281.41 00:25:19.873 lat (usec): min=4803, max=22214, avg=12725.42, stdev=1281.05 00:25:19.873 clat percentiles (usec): 00:25:19.873 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:25:19.873 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:25:19.873 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14353], 95.00th=[14877], 00:25:19.873 | 99.00th=[16057], 99.50th=[16581], 99.90th=[18482], 99.95th=[20579], 00:25:19.873 | 99.99th=[22152] 00:25:19.873 bw ( KiB/s): min=20622, max=22088, per=99.79%, avg=21241.50, stdev=612.67, samples=4 00:25:19.873 iops : min= 5155, max= 5522, avg=5310.25, stdev=153.34, samples=4 00:25:19.873 write: IOPS=5306, BW=20.7MiB/s (21.7MB/s)(41.7MiB/2010msec); 0 zone resets 00:25:19.873 slat (nsec): min=1968, max=311358, avg=2954.81, stdev=3795.70 00:25:19.873 clat (usec): min=2504, max=20447, avg=11282.53, stdev=1114.57 00:25:19.873 lat (usec): min=2518, max=20450, avg=11285.49, stdev=1114.43 00:25:19.873 clat percentiles (usec): 00:25:19.873 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:25:19.873 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:25:19.873 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[13042], 00:25:19.873 | 99.00th=[13698], 99.50th=[14091], 99.90th=[18744], 99.95th=[20055], 00:25:19.873 | 99.99th=[20317] 
00:25:19.873 bw ( KiB/s): min=20608, max=21596, per=99.91%, avg=21207.00, stdev=470.19, samples=4 00:25:19.873 iops : min= 5152, max= 5399, avg=5301.75, stdev=117.55, samples=4 00:25:19.873 lat (msec) : 4=0.04%, 10=5.49%, 20=94.39%, 50=0.08% 00:25:19.873 cpu : usr=73.17%, sys=21.25%, ctx=3, majf=0, minf=5 00:25:19.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:25:19.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:19.873 issued rwts: total=10696,10666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:19.873 00:25:19.873 Run status group 0 (all jobs): 00:25:19.873 READ: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=41.8MiB (43.8MB), run=2010-2010msec 00:25:19.873 WRITE: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.7MiB (43.7MB), run=2010-2010msec 00:25:19.873 14:39:26 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:19.873 14:39:26 -- host/fio.sh@74 -- # sync 00:25:20.130 14:39:26 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:25:20.388 14:39:27 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:20.645 14:39:27 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:25:20.903 14:39:27 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:21.160 14:39:27 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:25:22.092 14:39:28 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:22.092 14:39:28 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:22.092 14:39:28 -- host/fio.sh@86 -- # nvmftestfini 00:25:22.092 14:39:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:22.092 14:39:28 -- nvmf/common.sh@116 -- # sync 00:25:22.092 14:39:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:22.092 14:39:28 -- nvmf/common.sh@119 -- # set +e 00:25:22.092 14:39:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:22.092 14:39:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:22.092 rmmod nvme_tcp 00:25:22.093 rmmod nvme_fabrics 00:25:22.093 rmmod nvme_keyring 00:25:22.093 14:39:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:22.093 14:39:28 -- nvmf/common.sh@123 -- # set -e 00:25:22.093 14:39:28 -- nvmf/common.sh@124 -- # return 0 00:25:22.093 14:39:28 -- nvmf/common.sh@477 -- # '[' -n 84518 ']' 00:25:22.093 14:39:28 -- nvmf/common.sh@478 -- # killprocess 84518 00:25:22.093 14:39:28 -- common/autotest_common.sh@936 -- # '[' -z 84518 ']' 00:25:22.093 14:39:28 -- common/autotest_common.sh@940 -- # kill -0 84518 00:25:22.093 14:39:28 -- common/autotest_common.sh@941 -- # uname 00:25:22.093 14:39:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.093 14:39:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84518 00:25:22.093 killing process with pid 84518 00:25:22.093 14:39:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:22.093 14:39:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:22.093 14:39:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84518' 00:25:22.093 
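Every fio pass in this test goes through the same fio_nvme helper: it ldd-checks the SPDK fio plugin for sanitizer runtimes (the grep libasan / grep libclang_rt.asan traces above), builds LD_PRELOAD accordingly, and runs fio with a filename string that encodes the NVMe/TCP connection instead of naming a block device. Stripped of the sanitizer handling, the invocation reduces to this sketch, with paths and target address as used in this run:

  # Sketch of the fio_nvme invocation pattern used throughout fio.sh above.
  spdk=/home/vagrant/spdk_repo/spdk

  LD_PRELOAD="$spdk/build/fio/spdk_nvme" /usr/src/fio/fio \
    "$spdk/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096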
14:39:28 -- common/autotest_common.sh@955 -- # kill 84518 00:25:22.093 14:39:28 -- common/autotest_common.sh@960 -- # wait 84518 00:25:22.350 14:39:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:22.350 14:39:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:22.350 14:39:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:22.350 14:39:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.350 14:39:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:22.350 14:39:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.350 14:39:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.350 14:39:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.350 14:39:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:22.350 ************************************ 00:25:22.350 END TEST nvmf_fio_host 00:25:22.350 ************************************ 00:25:22.350 00:25:22.350 real 0m20.224s 00:25:22.350 user 1m28.373s 00:25:22.350 sys 0m4.558s 00:25:22.350 14:39:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:22.350 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:25:22.608 14:39:29 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:22.608 14:39:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:22.609 14:39:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:22.609 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:25:22.609 ************************************ 00:25:22.609 START TEST nvmf_failover 00:25:22.609 ************************************ 00:25:22.609 14:39:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:22.609 * Looking for test storage... 00:25:22.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:22.609 14:39:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:22.609 14:39:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:22.609 14:39:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:22.609 14:39:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:22.609 14:39:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:22.609 14:39:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:22.609 14:39:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:22.609 14:39:29 -- scripts/common.sh@335 -- # IFS=.-: 00:25:22.609 14:39:29 -- scripts/common.sh@335 -- # read -ra ver1 00:25:22.609 14:39:29 -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.609 14:39:29 -- scripts/common.sh@336 -- # read -ra ver2 00:25:22.609 14:39:29 -- scripts/common.sh@337 -- # local 'op=<' 00:25:22.609 14:39:29 -- scripts/common.sh@339 -- # ver1_l=2 00:25:22.609 14:39:29 -- scripts/common.sh@340 -- # ver2_l=1 00:25:22.609 14:39:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:22.609 14:39:29 -- scripts/common.sh@343 -- # case "$op" in 00:25:22.609 14:39:29 -- scripts/common.sh@344 -- # : 1 00:25:22.609 14:39:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:22.609 14:39:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.609 14:39:29 -- scripts/common.sh@364 -- # decimal 1 00:25:22.609 14:39:29 -- scripts/common.sh@352 -- # local d=1 00:25:22.609 14:39:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.609 14:39:29 -- scripts/common.sh@354 -- # echo 1 00:25:22.609 14:39:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:22.609 14:39:29 -- scripts/common.sh@365 -- # decimal 2 00:25:22.609 14:39:29 -- scripts/common.sh@352 -- # local d=2 00:25:22.609 14:39:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.609 14:39:29 -- scripts/common.sh@354 -- # echo 2 00:25:22.609 14:39:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:22.609 14:39:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:22.609 14:39:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:22.609 14:39:29 -- scripts/common.sh@367 -- # return 0 00:25:22.609 14:39:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.609 14:39:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.609 --rc genhtml_branch_coverage=1 00:25:22.609 --rc genhtml_function_coverage=1 00:25:22.609 --rc genhtml_legend=1 00:25:22.609 --rc geninfo_all_blocks=1 00:25:22.609 --rc geninfo_unexecuted_blocks=1 00:25:22.609 00:25:22.609 ' 00:25:22.609 14:39:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.609 --rc genhtml_branch_coverage=1 00:25:22.609 --rc genhtml_function_coverage=1 00:25:22.609 --rc genhtml_legend=1 00:25:22.609 --rc geninfo_all_blocks=1 00:25:22.609 --rc geninfo_unexecuted_blocks=1 00:25:22.609 00:25:22.609 ' 00:25:22.609 14:39:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.609 --rc genhtml_branch_coverage=1 00:25:22.609 --rc genhtml_function_coverage=1 00:25:22.609 --rc genhtml_legend=1 00:25:22.609 --rc geninfo_all_blocks=1 00:25:22.609 --rc geninfo_unexecuted_blocks=1 00:25:22.609 00:25:22.609 ' 00:25:22.609 14:39:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.609 --rc genhtml_branch_coverage=1 00:25:22.609 --rc genhtml_function_coverage=1 00:25:22.609 --rc genhtml_legend=1 00:25:22.609 --rc geninfo_all_blocks=1 00:25:22.609 --rc geninfo_unexecuted_blocks=1 00:25:22.609 00:25:22.609 ' 00:25:22.609 14:39:29 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:22.609 14:39:29 -- nvmf/common.sh@7 -- # uname -s 00:25:22.609 14:39:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.609 14:39:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.609 14:39:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.609 14:39:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.609 14:39:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.609 14:39:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.609 14:39:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.609 14:39:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.609 14:39:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.609 14:39:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.609 14:39:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:25:22.609 
14:39:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:25:22.609 14:39:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.609 14:39:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.609 14:39:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:22.609 14:39:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:22.609 14:39:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.609 14:39:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.609 14:39:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.609 14:39:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.609 14:39:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.609 14:39:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.609 14:39:29 -- paths/export.sh@5 -- # export PATH 00:25:22.609 14:39:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.609 14:39:29 -- nvmf/common.sh@46 -- # : 0 00:25:22.609 14:39:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:22.609 14:39:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:22.609 14:39:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:22.609 14:39:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.609 14:39:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.609 14:39:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
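The NVME_HOST array assembled above is only consumed on code paths that use the kernel initiator; this failover test drives I/O through bdevperf and never calls nvme connect. Purely as an illustration of what those parameters expand to, a connect against the subsystem created later in this run would look roughly like the following (hedged sketch, not a command executed by this job):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d \
    --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d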
00:25:22.609 14:39:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:22.609 14:39:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:22.609 14:39:29 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:22.609 14:39:29 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:22.609 14:39:29 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:22.609 14:39:29 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:22.609 14:39:29 -- host/failover.sh@18 -- # nvmftestinit 00:25:22.609 14:39:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:22.609 14:39:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.609 14:39:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:22.609 14:39:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:22.609 14:39:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:22.609 14:39:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.609 14:39:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.609 14:39:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.609 14:39:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:22.609 14:39:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:22.609 14:39:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:22.609 14:39:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:22.609 14:39:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:22.609 14:39:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:22.609 14:39:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.609 14:39:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.609 14:39:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:22.609 14:39:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:22.609 14:39:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:22.609 14:39:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:22.609 14:39:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:22.609 14:39:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.609 14:39:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:22.609 14:39:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:22.609 14:39:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:22.609 14:39:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:22.609 14:39:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:22.609 14:39:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:22.868 Cannot find device "nvmf_tgt_br" 00:25:22.868 14:39:29 -- nvmf/common.sh@154 -- # true 00:25:22.868 14:39:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:22.868 Cannot find device "nvmf_tgt_br2" 00:25:22.868 14:39:29 -- nvmf/common.sh@155 -- # true 00:25:22.868 14:39:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:22.868 14:39:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:22.868 Cannot find device "nvmf_tgt_br" 00:25:22.868 14:39:29 -- nvmf/common.sh@157 -- # true 00:25:22.868 14:39:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:22.868 Cannot find device "nvmf_tgt_br2" 00:25:22.868 14:39:29 -- nvmf/common.sh@158 -- # true 00:25:22.868 14:39:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:22.868 14:39:29 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:25:22.868 14:39:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:22.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.868 14:39:29 -- nvmf/common.sh@161 -- # true 00:25:22.868 14:39:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:22.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.868 14:39:29 -- nvmf/common.sh@162 -- # true 00:25:22.868 14:39:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:22.868 14:39:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:22.868 14:39:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:22.868 14:39:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:22.868 14:39:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:22.868 14:39:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:22.868 14:39:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:22.868 14:39:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:22.868 14:39:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:22.868 14:39:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:22.868 14:39:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:22.868 14:39:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:22.868 14:39:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:22.868 14:39:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:22.868 14:39:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:22.868 14:39:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:22.868 14:39:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:23.127 14:39:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:23.127 14:39:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:23.127 14:39:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:23.127 14:39:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:23.127 14:39:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:23.127 14:39:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:23.127 14:39:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:23.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:25:23.127 00:25:23.127 --- 10.0.0.2 ping statistics --- 00:25:23.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.127 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:25:23.127 14:39:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:23.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:23.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:25:23.127 00:25:23.127 --- 10.0.0.3 ping statistics --- 00:25:23.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.127 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:25:23.127 14:39:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:23.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:23.127 00:25:23.127 --- 10.0.0.1 ping statistics --- 00:25:23.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.127 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:23.127 14:39:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.127 14:39:29 -- nvmf/common.sh@421 -- # return 0 00:25:23.127 14:39:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:23.127 14:39:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.127 14:39:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:23.127 14:39:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:23.127 14:39:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.127 14:39:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:23.127 14:39:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:23.127 14:39:29 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:23.127 14:39:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:23.127 14:39:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.127 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:25:23.127 14:39:29 -- nvmf/common.sh@469 -- # nvmfpid=85259 00:25:23.127 14:39:29 -- nvmf/common.sh@470 -- # waitforlisten 85259 00:25:23.127 14:39:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:23.127 14:39:29 -- common/autotest_common.sh@829 -- # '[' -z 85259 ']' 00:25:23.127 14:39:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.127 14:39:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.127 14:39:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.127 14:39:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.127 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:25:23.127 [2024-12-06 14:39:29.982651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:23.127 [2024-12-06 14:39:29.982737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.387 [2024-12-06 14:39:30.114697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:23.387 [2024-12-06 14:39:30.246315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:23.387 [2024-12-06 14:39:30.246489] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.387 [2024-12-06 14:39:30.246504] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
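nvmf_veth_init, traced above, builds the virtual topology this TCP test runs on: the target lives in the nvmf_tgt_ns_spdk namespace behind a veth pair, the initiator side stays in the root namespace, and both are joined by the nvmf_br bridge. A condensed sketch of the essential commands, taken from that trace (the second target interface, 10.0.0.3, is set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After this, the pings above confirm that 10.0.0.2/10.0.0.3 are reachable from the root namespace and 10.0.0.1 from inside the target namespace.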
00:25:23.387 [2024-12-06 14:39:30.246513] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.387 [2024-12-06 14:39:30.247567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.387 [2024-12-06 14:39:30.247694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.387 [2024-12-06 14:39:30.247704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.321 14:39:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.321 14:39:31 -- common/autotest_common.sh@862 -- # return 0 00:25:24.321 14:39:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:24.321 14:39:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.321 14:39:31 -- common/autotest_common.sh@10 -- # set +x 00:25:24.321 14:39:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.321 14:39:31 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:24.321 [2024-12-06 14:39:31.252887] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.321 14:39:31 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:24.580 Malloc0 00:25:24.580 14:39:31 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:24.838 14:39:31 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.097 14:39:32 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.355 [2024-12-06 14:39:32.223829] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.355 14:39:32 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:25.614 [2024-12-06 14:39:32.464209] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.614 14:39:32 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:25.872 [2024-12-06 14:39:32.700702] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:25.872 14:39:32 -- host/failover.sh@31 -- # bdevperf_pid=85371 00:25:25.872 14:39:32 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:25.872 14:39:32 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:25.872 14:39:32 -- host/failover.sh@34 -- # waitforlisten 85371 /var/tmp/bdevperf.sock 00:25:25.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
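Everything the failover test needs on the target side is assembled over JSON-RPC, and the I/O generator (bdevperf) is then pointed at the subsystem through its own RPC socket. A condensed sketch of the sequence traced above and immediately below, with arguments taken from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # bdevperf runs against its own RPC socket (-z waits for configuration)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

  # first and second paths to the same subsystem are attached as NVMe0
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1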
00:25:25.872 14:39:32 -- common/autotest_common.sh@829 -- # '[' -z 85371 ']' 00:25:25.872 14:39:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.872 14:39:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.872 14:39:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.872 14:39:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.872 14:39:32 -- common/autotest_common.sh@10 -- # set +x 00:25:27.247 14:39:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.247 14:39:33 -- common/autotest_common.sh@862 -- # return 0 00:25:27.247 14:39:33 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:27.247 NVMe0n1 00:25:27.247 14:39:34 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:27.505 00:25:27.505 14:39:34 -- host/failover.sh@39 -- # run_test_pid=85425 00:25:27.505 14:39:34 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:27.505 14:39:34 -- host/failover.sh@41 -- # sleep 1 00:25:28.887 14:39:35 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.887 [2024-12-06 14:39:35.695709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.887 [2024-12-06 14:39:35.695902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695963] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.695996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696020] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696494] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 [2024-12-06 14:39:35.696914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b5b0 is same with the state(5) to be set 00:25:28.888 14:39:35 -- host/failover.sh@45 -- # sleep 3 00:25:32.177 14:39:38 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:32.177 00:25:32.177 14:39:39 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:32.435 [2024-12-06 14:39:39.352610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.435 [2024-12-06 14:39:39.352670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.435 [2024-12-06 14:39:39.352680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 
00:25:32.435 [2024-12-06 14:39:39.352690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.435 [2024-12-06 14:39:39.352697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.435 [2024-12-06 14:39:39.352707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.435 [2024-12-06 14:39:39.352716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.352992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 [2024-12-06 14:39:39.353060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2c420 is same with the state(5) to be set 00:25:32.436 14:39:39 -- host/failover.sh@50 -- # sleep 3 00:25:35.721 14:39:42 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.721 [2024-12-06 14:39:42.633311] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.721 
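Taken together, the RPC calls traced around this point are the actual failover exercise: while bdevperf keeps the 15-second verify workload running against NVMe0n1, host/failover.sh flaps the subsystem's listeners so that I/O is expected to move between the three paths. The recorded order (continuing below this point) is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # path 1 (4420) goes away -> I/O should fail over to 4421
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # a third path (4422) is attached via the bdevperf RPC socket, then 4421 is removed
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  # 4420 is restored, then 4422 is removed, so I/O should end up back on the first path
  $rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The bursts of tcp.c recv-state messages in between coincide with the target tearing down qpairs on the listener that was just removed.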
14:39:42 -- host/failover.sh@55 -- # sleep 1 00:25:37.099 14:39:43 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:37.099 [2024-12-06 14:39:43.932902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.932990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933123] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933162] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the 
state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933348] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933418] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 [2024-12-06 14:39:43.933530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f2cfb0 is same with the state(5) to be set 00:25:37.099 14:39:43 -- host/failover.sh@59 -- # wait 85425 00:25:43.661 0 00:25:43.661 14:39:49 -- host/failover.sh@61 -- # killprocess 85371 00:25:43.661 14:39:49 -- common/autotest_common.sh@936 -- # '[' -z 85371 ']' 00:25:43.661 14:39:49 -- common/autotest_common.sh@940 -- # kill -0 85371 00:25:43.661 14:39:49 -- common/autotest_common.sh@941 -- # uname 00:25:43.661 14:39:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.661 14:39:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85371 00:25:43.661 killing process with pid 85371 00:25:43.661 14:39:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:43.661 14:39:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:43.661 14:39:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85371' 00:25:43.661 14:39:49 -- common/autotest_common.sh@955 -- # kill 85371 00:25:43.661 14:39:49 -- common/autotest_common.sh@960 -- # wait 85371 00:25:43.661 14:39:49 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:43.661 [2024-12-06 14:39:32.783463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:43.661 [2024-12-06 14:39:32.783604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85371 ] 00:25:43.661 [2024-12-06 14:39:32.921337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.661 [2024-12-06 14:39:33.034686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.661 Running I/O for 15 seconds... 
00:25:43.661 [2024-12-06 14:39:35.697403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.661 [2024-12-06 14:39:35.697553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.661 [2024-12-06 14:39:35.697587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.661 [2024-12-06 14:39:35.697604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.661 [2024-12-06 14:39:35.697620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 
14:39:35.697873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.697963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.697988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698566] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.662 [2024-12-06 14:39:35.698814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.662 [2024-12-06 14:39:35.698881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.662 [2024-12-06 14:39:35.698894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.698908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.698921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.698935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.698948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.698963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.698976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.698990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119576 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:43.663 [2024-12-06 14:39:35.699492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.699558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.699625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.699709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 
14:39:35.699808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.699834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.663 [2024-12-06 14:39:35.699861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.699888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.699924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.663 [2024-12-06 14:39:35.699952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.663 [2024-12-06 14:39:35.699965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.699978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.699992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.664 [2024-12-06 14:39:35.700174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.664 [2024-12-06 14:39:35.700201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.664 [2024-12-06 14:39:35.700229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700377] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.664 [2024-12-06 14:39:35.700864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.664 [2024-12-06 14:39:35.700891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.664 [2024-12-06 14:39:35.700924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.700978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.700992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.701007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.701021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.701034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.701047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.701074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.701090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.701123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.701139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.701152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.664 [2024-12-06 14:39:35.701166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.664 [2024-12-06 14:39:35.701180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.665 [2024-12-06 14:39:35.701207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.665 [2024-12-06 14:39:35.701261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.665 [2024-12-06 14:39:35.701289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 
[2024-12-06 14:39:35.701357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:35.701552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16459a0 is same with the state(5) to be set 00:25:43.665 [2024-12-06 14:39:35.701589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:43.665 [2024-12-06 14:39:35.701601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:43.665 [2024-12-06 14:39:35.701617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120072 len:8 PRP1 0x0 PRP2 0x0 00:25:43.665 [2024-12-06 14:39:35.701631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701738] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16459a0 was disconnected and freed. reset controller. 
00:25:43.665 [2024-12-06 14:39:35.701760] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:43.665 [2024-12-06 14:39:35.701821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.665 [2024-12-06 14:39:35.701843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.665 [2024-12-06 14:39:35.701871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.665 [2024-12-06 14:39:35.701900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.665 [2024-12-06 14:39:35.701926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:35.701955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.665 [2024-12-06 14:39:35.704148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.665 [2024-12-06 14:39:35.704186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d0440 (9): Bad file descriptor 00:25:43.665 [2024-12-06 14:39:35.726295] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
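The long run of "ABORTED - SQ DELETION" completions above, followed by "aborting queued i/o" and the controller reset, is the expected face of a path failover: once the 10.0.0.2:4420 path is torn down, every command still outstanding on that I/O qpair is completed manually with an abort status, bdev_nvme resets the controller, and the I/O is resubmitted on 10.0.0.2:4421. For orientation, a rough sketch of the target-side setup that gives the failover a second path to land on (illustrative only; the real steps live in test/nvmf/host/failover.sh and the nvmf common helpers, and the subsystem/bdev names here are placeholders):

scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# bdevperf (pid 85371) attaches a controller with both trids; the test then drops the
# 4420 path mid-run, which is what triggers the bdev_nvme_failover_trid message seen above.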
00:25:43.665 [2024-12-06 14:39:39.353183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 
14:39:39.353594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.665 [2024-12-06 14:39:39.353772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.665 [2024-12-06 14:39:39.353787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.353804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.353819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.353836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.353852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.353868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.353883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.353899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.353913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.353929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.353944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.353985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354299] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.666 [2024-12-06 14:39:39.354881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.666 [2024-12-06 14:39:39.354953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.666 [2024-12-06 14:39:39.354967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.354982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 
[2024-12-06 14:39:39.355340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355729] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.355946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.355983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.355999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.667 [2024-12-06 14:39:39.356014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.356030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.356044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.356059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.356074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.356089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.356104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.356120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.667 [2024-12-06 14:39:39.356134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.667 [2024-12-06 14:39:39.356149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.356163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.356199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356482] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.356843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.356879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.356911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.356956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.356971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.356985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.357013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.357042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.357070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.357099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.357139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.357168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.357218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.357248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.668 [2024-12-06 14:39:39.357278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.668 [2024-12-06 14:39:39.357314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.668 [2024-12-06 14:39:39.357331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.669 [2024-12-06 14:39:39.357410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.669 [2024-12-06 14:39:39.357493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 
[2024-12-06 14:39:39.357538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:39.357741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647890 is same with the state(5) to be set 00:25:43.669 [2024-12-06 14:39:39.357792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:43.669 [2024-12-06 14:39:39.357804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:43.669 [2024-12-06 14:39:39.357816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129976 len:8 PRP1 0x0 PRP2 0x0 00:25:43.669 [2024-12-06 14:39:39.357830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.357891] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1647890 was disconnected and freed. reset controller. 
00:25:43.669 [2024-12-06 14:39:39.357919] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:43.669 [2024-12-06 14:39:39.358006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.669 [2024-12-06 14:39:39.358039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.358062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.669 [2024-12-06 14:39:39.358076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.358091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.669 [2024-12-06 14:39:39.358106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.358120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.669 [2024-12-06 14:39:39.358134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:39.358158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.669 [2024-12-06 14:39:39.360544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.669 [2024-12-06 14:39:39.360585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d0440 (9): Bad file descriptor 00:25:43.669 [2024-12-06 14:39:39.392630] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:43.669 [2024-12-06 14:39:43.933650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.933745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.933773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.933791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.933809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.933824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.933841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.933856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.933898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.933915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.933931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.933946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.933962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.933978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.933994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934103] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.669 [2024-12-06 14:39:43.934272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-12-06 14:39:43.934286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.934968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.934985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.670 [2024-12-06 14:39:43.935032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.670 [2024-12-06 14:39:43.935063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.670 [2024-12-06 14:39:43.935125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83632 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.670 [2024-12-06 14:39:43.935428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.670 [2024-12-06 14:39:43.935443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.671 [2024-12-06 14:39:43.935459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.671 [2024-12-06 14:39:43.935474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.671 [2024-12-06 14:39:43.935490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.671 [2024-12-06 14:39:43.935518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.671 [2024-12-06 14:39:43.935536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:43.671 [2024-12-06 14:39:43.935551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... dozens of further nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: every READ and WRITE still queued on qid:1 (various cids and lbas) completes with ABORTED - SQ DELETION (00/08) while the qpair is torn down ...] 00:25:43.673 [2024-12-06 14:39:43.938166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642c80 is same with the state(5) to be set 00:25:43.673 [2024-12-06 14:39:43.938190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:43.673 [2024-12-06 14:39:43.938202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:43.673 [2024-12-06 14:39:43.938215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84200 len:8 PRP1 0x0 PRP2 0x0 00:25:43.673 [2024-12-06 14:39:43.938229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.673 [2024-12-06 14:39:43.938292] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1642c80 was disconnected and freed. reset controller. 
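The abort storm above is the expected fallout of deleting the submission queue during a failover: every command still queued on qid:1 is completed with ABORTED - SQ DELETION before bdev_nvme frees the qpair and schedules the controller reset. When reading a capture like this, the interesting state transitions are easier to spot with the per-command notices filtered out; a minimal sketch, assuming the bdevperf output was saved to a file such as the try.txt shown later in this log:

# drop the per-command abort noise, keep errors and state changes
grep -Ev 'nvme_io_qpair_print_command|ABORTED - SQ DELETION' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt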
00:25:43.673 [2024-12-06 14:39:43.938312] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:43.673 [2024-12-06 14:39:43.938384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.673 [2024-12-06 14:39:43.938420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.673 [2024-12-06 14:39:43.938440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.673 [2024-12-06 14:39:43.938455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.673 [2024-12-06 14:39:43.938470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.673 [2024-12-06 14:39:43.938485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.673 [2024-12-06 14:39:43.938500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.673 [2024-12-06 14:39:43.938514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.673 [2024-12-06 14:39:43.938529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.673 [2024-12-06 14:39:43.938564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d0440 (9): Bad file descriptor 00:25:43.673 [2024-12-06 14:39:43.940825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.673 [2024-12-06 14:39:43.975080] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:43.673 00:25:43.673 Latency(us) 00:25:43.673 [2024-12-06T14:39:50.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.673 [2024-12-06T14:39:50.643Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:43.673 Verification LBA range: start 0x0 length 0x4000 00:25:43.673 NVMe0n1 : 15.01 13139.65 51.33 297.79 0.00 9508.67 640.47 15728.64 00:25:43.673 [2024-12-06T14:39:50.643Z] =================================================================================================================== 00:25:43.673 [2024-12-06T14:39:50.643Z] Total : 13139.65 51.33 297.79 0.00 9508.67 640.47 15728.64 00:25:43.673 Received shutdown signal, test time was about 15.000000 seconds 00:25:43.673 00:25:43.673 Latency(us) 00:25:43.673 [2024-12-06T14:39:50.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.673 [2024-12-06T14:39:50.643Z] =================================================================================================================== 00:25:43.673 [2024-12-06T14:39:50.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.673 14:39:49 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:43.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
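The all-zero table above is only bdevperf's shutdown summary; the pass/fail decision is the grep that follows it. A hedged reconstruction of the check traced around host/failover.sh@65-67, run against the captured bdevperf output (try.txt in this run):

# three path switches (4420 -> 4421 -> 4422 -> back to 4420) should yield
# exactly three successful controller resets in the captured output
count=$(grep -c 'Resetting controller successful' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
(( count == 3 )) || { echo "expected 3 resets, got $count" >&2; exit 1; }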
00:25:43.673 14:39:49 -- host/failover.sh@65 -- # count=3 00:25:43.673 14:39:49 -- host/failover.sh@67 -- # (( count != 3 )) 00:25:43.673 14:39:49 -- host/failover.sh@73 -- # bdevperf_pid=85624 00:25:43.673 14:39:49 -- host/failover.sh@75 -- # waitforlisten 85624 /var/tmp/bdevperf.sock 00:25:43.673 14:39:49 -- common/autotest_common.sh@829 -- # '[' -z 85624 ']' 00:25:43.673 14:39:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:43.673 14:39:49 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:43.673 14:39:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:43.673 14:39:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:43.673 14:39:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:43.673 14:39:49 -- common/autotest_common.sh@10 -- # set +x 00:25:44.240 14:39:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.240 14:39:50 -- common/autotest_common.sh@862 -- # return 0 00:25:44.240 14:39:50 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:44.499 [2024-12-06 14:39:51.251842] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:44.499 14:39:51 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:44.757 [2024-12-06 14:39:51.480131] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:44.757 14:39:51 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.016 NVMe0n1 00:25:45.016 14:39:51 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.275 00:25:45.275 14:39:52 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.533 00:25:45.533 14:39:52 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:45.533 14:39:52 -- host/failover.sh@82 -- # grep -q NVMe0 00:25:45.792 14:39:52 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.050 14:39:52 -- host/failover.sh@87 -- # sleep 3 00:25:49.334 14:39:55 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:49.334 14:39:55 -- host/failover.sh@88 -- # grep -q NVMe0 00:25:49.334 14:39:56 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:49.334 14:39:56 -- host/failover.sh@90 -- # run_test_pid=85761 00:25:49.334 14:39:56 -- host/failover.sh@92 -- # wait 85761 00:25:50.709 0 00:25:50.709 14:39:57 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:50.709 [2024-12-06 14:39:49.959951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:50.709 [2024-12-06 14:39:49.960086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85624 ] 00:25:50.709 [2024-12-06 14:39:50.099130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.709 [2024-12-06 14:39:50.212599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.709 [2024-12-06 14:39:52.866944] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:50.709 [2024-12-06 14:39:52.867071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.709 [2024-12-06 14:39:52.867125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.709 [2024-12-06 14:39:52.867143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.709 [2024-12-06 14:39:52.867157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.709 [2024-12-06 14:39:52.867172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.709 [2024-12-06 14:39:52.867185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.709 [2024-12-06 14:39:52.867198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.709 [2024-12-06 14:39:52.867211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.709 [2024-12-06 14:39:52.867225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.709 [2024-12-06 14:39:52.867289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.709 [2024-12-06 14:39:52.867320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a440 (9): Bad file descriptor 00:25:50.709 [2024-12-06 14:39:52.875431] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:50.709 Running I/O for 1 seconds... 
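The try.txt excerpt above is the bdevperf side of the run: it connects to nqn.2016-06.io.spdk:cnode1, the 10.0.0.2:4420 path is torn down, and bdev_nvme fails over to 4421. Pieced together from the rpc.py calls traced earlier, the multipath setup looks roughly like the following sketch (an illustration of the flow, not the verbatim contents of host/failover.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# register every listener of cnode1 as a path of the same bdev_nvme controller
for port in 4420 4421 4422; do
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# drop the active path; bdev_nvme should reset and fail over to the next listener
"$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
"$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0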
00:25:50.709 00:25:50.709 Latency(us) 00:25:50.709 [2024-12-06T14:39:57.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.709 [2024-12-06T14:39:57.679Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:50.709 Verification LBA range: start 0x0 length 0x4000 00:25:50.709 NVMe0n1 : 1.01 13343.54 52.12 0.00 0.00 9543.19 1750.11 10545.34 00:25:50.709 [2024-12-06T14:39:57.679Z] =================================================================================================================== 00:25:50.709 [2024-12-06T14:39:57.679Z] Total : 13343.54 52.12 0.00 0.00 9543.19 1750.11 10545.34 00:25:50.709 14:39:57 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:50.709 14:39:57 -- host/failover.sh@95 -- # grep -q NVMe0 00:25:50.709 14:39:57 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:50.968 14:39:57 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:50.968 14:39:57 -- host/failover.sh@99 -- # grep -q NVMe0 00:25:51.226 14:39:58 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:51.485 14:39:58 -- host/failover.sh@101 -- # sleep 3 00:25:54.776 14:40:01 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:54.776 14:40:01 -- host/failover.sh@103 -- # grep -q NVMe0 00:25:54.776 14:40:01 -- host/failover.sh@108 -- # killprocess 85624 00:25:54.776 14:40:01 -- common/autotest_common.sh@936 -- # '[' -z 85624 ']' 00:25:54.776 14:40:01 -- common/autotest_common.sh@940 -- # kill -0 85624 00:25:54.776 14:40:01 -- common/autotest_common.sh@941 -- # uname 00:25:54.776 14:40:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:54.776 14:40:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85624 00:25:54.776 killing process with pid 85624 00:25:54.776 14:40:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:54.776 14:40:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:54.776 14:40:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85624' 00:25:54.776 14:40:01 -- common/autotest_common.sh@955 -- # kill 85624 00:25:54.776 14:40:01 -- common/autotest_common.sh@960 -- # wait 85624 00:25:55.034 14:40:01 -- host/failover.sh@110 -- # sync 00:25:55.294 14:40:02 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.552 14:40:02 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:55.552 14:40:02 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:55.552 14:40:02 -- host/failover.sh@116 -- # nvmftestfini 00:25:55.552 14:40:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:55.552 14:40:02 -- nvmf/common.sh@116 -- # sync 00:25:55.552 14:40:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:55.552 14:40:02 -- nvmf/common.sh@119 -- # set +e 00:25:55.552 14:40:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:55.552 14:40:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:55.552 rmmod nvme_tcp 
00:25:55.552 rmmod nvme_fabrics 00:25:55.552 rmmod nvme_keyring 00:25:55.552 14:40:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:55.552 14:40:02 -- nvmf/common.sh@123 -- # set -e 00:25:55.552 14:40:02 -- nvmf/common.sh@124 -- # return 0 00:25:55.552 14:40:02 -- nvmf/common.sh@477 -- # '[' -n 85259 ']' 00:25:55.552 14:40:02 -- nvmf/common.sh@478 -- # killprocess 85259 00:25:55.552 14:40:02 -- common/autotest_common.sh@936 -- # '[' -z 85259 ']' 00:25:55.552 14:40:02 -- common/autotest_common.sh@940 -- # kill -0 85259 00:25:55.552 14:40:02 -- common/autotest_common.sh@941 -- # uname 00:25:55.552 14:40:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:55.552 14:40:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85259 00:25:55.552 killing process with pid 85259 00:25:55.552 14:40:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:55.552 14:40:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:55.552 14:40:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85259' 00:25:55.552 14:40:02 -- common/autotest_common.sh@955 -- # kill 85259 00:25:55.552 14:40:02 -- common/autotest_common.sh@960 -- # wait 85259 00:25:56.118 14:40:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:56.119 14:40:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:56.119 14:40:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:56.119 14:40:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:56.119 14:40:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:56.119 14:40:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.119 14:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.119 14:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.119 14:40:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:56.119 00:25:56.119 real 0m33.507s 00:25:56.119 user 2m9.583s 00:25:56.119 sys 0m5.031s 00:25:56.119 14:40:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:56.119 14:40:02 -- common/autotest_common.sh@10 -- # set +x 00:25:56.119 ************************************ 00:25:56.119 END TEST nvmf_failover 00:25:56.119 ************************************ 00:25:56.119 14:40:02 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:56.119 14:40:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:56.119 14:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:56.119 14:40:02 -- common/autotest_common.sh@10 -- # set +x 00:25:56.119 ************************************ 00:25:56.119 START TEST nvmf_discovery 00:25:56.119 ************************************ 00:25:56.119 14:40:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:56.119 * Looking for test storage... 
00:25:56.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:56.119 14:40:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:56.119 14:40:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:56.119 14:40:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:56.119 14:40:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:56.119 14:40:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:56.119 14:40:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:56.119 14:40:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:56.119 14:40:03 -- scripts/common.sh@335 -- # IFS=.-: 00:25:56.119 14:40:03 -- scripts/common.sh@335 -- # read -ra ver1 00:25:56.119 14:40:03 -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.119 14:40:03 -- scripts/common.sh@336 -- # read -ra ver2 00:25:56.119 14:40:03 -- scripts/common.sh@337 -- # local 'op=<' 00:25:56.119 14:40:03 -- scripts/common.sh@339 -- # ver1_l=2 00:25:56.119 14:40:03 -- scripts/common.sh@340 -- # ver2_l=1 00:25:56.119 14:40:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:56.119 14:40:03 -- scripts/common.sh@343 -- # case "$op" in 00:25:56.119 14:40:03 -- scripts/common.sh@344 -- # : 1 00:25:56.119 14:40:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:56.119 14:40:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:56.119 14:40:03 -- scripts/common.sh@364 -- # decimal 1 00:25:56.119 14:40:03 -- scripts/common.sh@352 -- # local d=1 00:25:56.119 14:40:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.119 14:40:03 -- scripts/common.sh@354 -- # echo 1 00:25:56.119 14:40:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:56.119 14:40:03 -- scripts/common.sh@365 -- # decimal 2 00:25:56.119 14:40:03 -- scripts/common.sh@352 -- # local d=2 00:25:56.119 14:40:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.119 14:40:03 -- scripts/common.sh@354 -- # echo 2 00:25:56.119 14:40:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:56.119 14:40:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:56.119 14:40:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:56.119 14:40:03 -- scripts/common.sh@367 -- # return 0 00:25:56.119 14:40:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.119 14:40:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.119 --rc genhtml_branch_coverage=1 00:25:56.119 --rc genhtml_function_coverage=1 00:25:56.119 --rc genhtml_legend=1 00:25:56.119 --rc geninfo_all_blocks=1 00:25:56.119 --rc geninfo_unexecuted_blocks=1 00:25:56.119 00:25:56.119 ' 00:25:56.119 14:40:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.119 --rc genhtml_branch_coverage=1 00:25:56.119 --rc genhtml_function_coverage=1 00:25:56.119 --rc genhtml_legend=1 00:25:56.119 --rc geninfo_all_blocks=1 00:25:56.119 --rc geninfo_unexecuted_blocks=1 00:25:56.119 00:25:56.119 ' 00:25:56.119 14:40:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.119 --rc genhtml_branch_coverage=1 00:25:56.119 --rc genhtml_function_coverage=1 00:25:56.119 --rc genhtml_legend=1 00:25:56.119 --rc geninfo_all_blocks=1 00:25:56.119 --rc geninfo_unexecuted_blocks=1 00:25:56.119 00:25:56.119 ' 00:25:56.119 
14:40:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.119 --rc genhtml_branch_coverage=1 00:25:56.119 --rc genhtml_function_coverage=1 00:25:56.119 --rc genhtml_legend=1 00:25:56.119 --rc geninfo_all_blocks=1 00:25:56.119 --rc geninfo_unexecuted_blocks=1 00:25:56.119 00:25:56.119 ' 00:25:56.119 14:40:03 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:56.119 14:40:03 -- nvmf/common.sh@7 -- # uname -s 00:25:56.119 14:40:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.119 14:40:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.119 14:40:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.119 14:40:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.119 14:40:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.119 14:40:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.119 14:40:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.119 14:40:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.119 14:40:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.119 14:40:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.378 14:40:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:25:56.378 14:40:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:25:56.378 14:40:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.378 14:40:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.378 14:40:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:56.378 14:40:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:56.378 14:40:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.378 14:40:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.378 14:40:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.378 14:40:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.378 14:40:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.378 14:40:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.378 14:40:03 -- paths/export.sh@5 -- # export PATH 00:25:56.378 14:40:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.378 14:40:03 -- nvmf/common.sh@46 -- # : 0 00:25:56.378 14:40:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:56.378 14:40:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:56.378 14:40:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:56.378 14:40:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.378 14:40:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.378 14:40:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:56.378 14:40:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:56.378 14:40:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:56.378 14:40:03 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:56.378 14:40:03 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:56.378 14:40:03 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:56.378 14:40:03 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:56.378 14:40:03 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:56.378 14:40:03 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:56.378 14:40:03 -- host/discovery.sh@25 -- # nvmftestinit 00:25:56.378 14:40:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:56.378 14:40:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.378 14:40:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:56.378 14:40:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:56.378 14:40:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:56.378 14:40:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.378 14:40:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.378 14:40:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.378 14:40:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:56.378 14:40:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:56.378 14:40:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:56.378 14:40:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:56.378 14:40:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:56.378 14:40:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:56.378 14:40:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.378 14:40:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.378 14:40:03 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:56.378 14:40:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:56.378 14:40:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:56.378 14:40:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:56.378 14:40:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:56.378 14:40:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.378 14:40:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:56.378 14:40:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:56.378 14:40:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:56.378 14:40:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:56.378 14:40:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:56.378 14:40:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:56.378 Cannot find device "nvmf_tgt_br" 00:25:56.378 14:40:03 -- nvmf/common.sh@154 -- # true 00:25:56.378 14:40:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:56.378 Cannot find device "nvmf_tgt_br2" 00:25:56.378 14:40:03 -- nvmf/common.sh@155 -- # true 00:25:56.378 14:40:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:56.378 14:40:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:56.378 Cannot find device "nvmf_tgt_br" 00:25:56.378 14:40:03 -- nvmf/common.sh@157 -- # true 00:25:56.378 14:40:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:56.378 Cannot find device "nvmf_tgt_br2" 00:25:56.378 14:40:03 -- nvmf/common.sh@158 -- # true 00:25:56.378 14:40:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:56.378 14:40:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:56.378 14:40:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:56.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:56.378 14:40:03 -- nvmf/common.sh@161 -- # true 00:25:56.379 14:40:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:56.379 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:56.379 14:40:03 -- nvmf/common.sh@162 -- # true 00:25:56.379 14:40:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:56.379 14:40:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:56.379 14:40:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:56.379 14:40:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:56.379 14:40:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:56.379 14:40:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:56.379 14:40:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:56.379 14:40:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:56.379 14:40:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:56.379 14:40:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:56.379 14:40:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:56.379 14:40:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:56.379 14:40:03 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:56.379 14:40:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:56.379 14:40:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:56.379 14:40:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:56.379 14:40:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:56.379 14:40:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:56.637 14:40:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:56.637 14:40:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:56.637 14:40:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:56.637 14:40:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:56.637 14:40:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:56.637 14:40:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:56.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:25:56.637 00:25:56.637 --- 10.0.0.2 ping statistics --- 00:25:56.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.637 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:56.637 14:40:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:56.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:56.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:25:56.637 00:25:56.637 --- 10.0.0.3 ping statistics --- 00:25:56.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.637 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:56.637 14:40:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:56.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:25:56.637 00:25:56.637 --- 10.0.0.1 ping statistics --- 00:25:56.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.637 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:56.637 14:40:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.637 14:40:03 -- nvmf/common.sh@421 -- # return 0 00:25:56.637 14:40:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:56.637 14:40:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.637 14:40:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:56.637 14:40:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:56.637 14:40:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.637 14:40:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:56.637 14:40:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:56.637 14:40:03 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:56.637 14:40:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:56.637 14:40:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.637 14:40:03 -- common/autotest_common.sh@10 -- # set +x 00:25:56.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
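All of the ip and ping output above comes from nvmf_veth_init in test/nvmf/common.sh: the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on the host side, and the veth peers are stitched together with a bridge. A condensed sketch of the topology being built (omitting the teardown of stale devices and the iptables rules shown in the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target, first IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target, second IP

ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # host -> target sanity check, as in the output above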
00:25:56.637 14:40:03 -- nvmf/common.sh@469 -- # nvmfpid=86079 00:25:56.637 14:40:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:56.637 14:40:03 -- nvmf/common.sh@470 -- # waitforlisten 86079 00:25:56.637 14:40:03 -- common/autotest_common.sh@829 -- # '[' -z 86079 ']' 00:25:56.637 14:40:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.637 14:40:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.637 14:40:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.637 14:40:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.637 14:40:03 -- common/autotest_common.sh@10 -- # set +x 00:25:56.637 [2024-12-06 14:40:03.519126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:56.637 [2024-12-06 14:40:03.519462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.895 [2024-12-06 14:40:03.661833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.895 [2024-12-06 14:40:03.802196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:56.895 [2024-12-06 14:40:03.802386] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.896 [2024-12-06 14:40:03.802403] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.896 [2024-12-06 14:40:03.802442] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
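The target prints the usual tracepoint hint at startup. If a run in this environment needs debugging, the two options it suggests look like this (flags taken from the message itself; the exact location of the spdk_trace binary depends on the build):

spdk_trace -s nvmf -i 0         # live snapshot of the tracepoint ring for shm id 0
cp /dev/shm/nvmf_trace.0 /tmp   # or keep the raw ring buffer for offline analysis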
00:25:56.896 [2024-12-06 14:40:03.802483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.831 14:40:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.831 14:40:04 -- common/autotest_common.sh@862 -- # return 0 00:25:57.831 14:40:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:57.831 14:40:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:57.831 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 14:40:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.831 14:40:04 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:57.831 14:40:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.831 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 [2024-12-06 14:40:04.626275] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.831 14:40:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.831 14:40:04 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:57.831 14:40:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.831 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 [2024-12-06 14:40:04.634504] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:57.831 14:40:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.831 14:40:04 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:57.831 14:40:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.831 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 null0 00:25:57.831 14:40:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.831 14:40:04 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:57.831 14:40:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.831 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 null1 00:25:57.831 14:40:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.831 14:40:04 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:57.831 14:40:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.831 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:57.831 14:40:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.831 14:40:04 -- host/discovery.sh@45 -- # hostpid=86129 00:25:57.831 14:40:04 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:57.831 14:40:04 -- host/discovery.sh@46 -- # waitforlisten 86129 /tmp/host.sock 00:25:57.831 14:40:04 -- common/autotest_common.sh@829 -- # '[' -z 86129 ']' 00:25:57.831 14:40:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:57.831 14:40:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.831 14:40:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:57.831 14:40:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.831 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 [2024-12-06 14:40:04.730904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:57.831 [2024-12-06 14:40:04.731248] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86129 ] 00:25:58.090 [2024-12-06 14:40:04.872861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.090 [2024-12-06 14:40:04.987193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:58.090 [2024-12-06 14:40:04.987628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.024 14:40:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.024 14:40:05 -- common/autotest_common.sh@862 -- # return 0 00:25:59.024 14:40:05 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:59.024 14:40:05 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@72 -- # notify_id=0 00:25:59.024 14:40:05 -- host/discovery.sh@78 -- # get_subsystem_names 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # sort 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # xargs 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:25:59.024 14:40:05 -- host/discovery.sh@79 -- # get_bdev_list 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # sort 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # xargs 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:25:59.024 14:40:05 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@82 -- # get_subsystem_names 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # sort 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # xargs 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:25:59.024 14:40:05 -- host/discovery.sh@83 -- # get_bdev_list 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # sort 00:25:59.024 14:40:05 -- host/discovery.sh@55 -- # xargs 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:59.024 14:40:05 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.024 14:40:05 -- host/discovery.sh@86 -- # get_subsystem_names 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.024 14:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.024 14:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # xargs 00:25:59.024 14:40:05 -- host/discovery.sh@59 -- # sort 00:25:59.024 14:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.282 14:40:06 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:25:59.282 14:40:06 -- host/discovery.sh@87 -- # get_bdev_list 00:25:59.282 14:40:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.282 14:40:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.282 14:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.282 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:25:59.282 14:40:06 -- host/discovery.sh@55 -- # xargs 00:25:59.282 14:40:06 -- host/discovery.sh@55 -- # sort 00:25:59.282 14:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.282 14:40:06 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:59.282 14:40:06 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.282 14:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.282 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:25:59.282 [2024-12-06 14:40:06.074883] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.282 14:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.282 14:40:06 -- host/discovery.sh@92 -- # get_subsystem_names 00:25:59.283 14:40:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.283 14:40:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.283 14:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.283 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:25:59.283 14:40:06 -- host/discovery.sh@59 -- # xargs 00:25:59.283 14:40:06 -- host/discovery.sh@59 -- # sort 00:25:59.283 
14:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.283 14:40:06 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:59.283 14:40:06 -- host/discovery.sh@93 -- # get_bdev_list 00:25:59.283 14:40:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.283 14:40:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.283 14:40:06 -- host/discovery.sh@55 -- # xargs 00:25:59.283 14:40:06 -- host/discovery.sh@55 -- # sort 00:25:59.283 14:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.283 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:25:59.283 14:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.283 14:40:06 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:25:59.283 14:40:06 -- host/discovery.sh@94 -- # get_notification_count 00:25:59.283 14:40:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:59.283 14:40:06 -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.283 14:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.283 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:25:59.283 14:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.283 14:40:06 -- host/discovery.sh@74 -- # notification_count=0 00:25:59.283 14:40:06 -- host/discovery.sh@75 -- # notify_id=0 00:25:59.283 14:40:06 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:25:59.283 14:40:06 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:59.283 14:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.283 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:25:59.283 14:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.283 14:40:06 -- host/discovery.sh@100 -- # sleep 1 00:25:59.847 [2024-12-06 14:40:06.749539] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:59.847 [2024-12-06 14:40:06.749601] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:59.848 [2024-12-06 14:40:06.749622] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:00.106 [2024-12-06 14:40:06.835717] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:00.106 [2024-12-06 14:40:06.892131] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:00.106 [2024-12-06 14:40:06.892180] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:00.363 14:40:07 -- host/discovery.sh@101 -- # get_subsystem_names 00:26:00.363 14:40:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.363 14:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.363 14:40:07 -- host/discovery.sh@59 -- # xargs 00:26:00.363 14:40:07 -- host/discovery.sh@59 -- # sort 00:26:00.363 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:26:00.363 14:40:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.363 14:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.363 14:40:07 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.363 14:40:07 -- host/discovery.sh@102 -- # get_bdev_list 00:26:00.363 14:40:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
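The nvme0 / nvme0n1 string comparisons around here come from small helpers in host/discovery.sh whose pipelines are visible in the trace (@55, @59, @63, @74). Reconstructed from those traced commands only, they look roughly like this, not necessarily verbatim:

    get_subsystem_names() {    # controller names the host has attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # bdevs created from the attached namespaces
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {    # listener ports (trsvcid) per controller, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() { # bdev add/remove notifications newer than $notify_id
        rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length'
    }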
00:26:00.363 14:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.363 14:40:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.363 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:26:00.363 14:40:07 -- host/discovery.sh@55 -- # sort 00:26:00.363 14:40:07 -- host/discovery.sh@55 -- # xargs 00:26:00.363 14:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.621 14:40:07 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:00.621 14:40:07 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:26:00.621 14:40:07 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:00.621 14:40:07 -- host/discovery.sh@63 -- # sort -n 00:26:00.621 14:40:07 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:00.621 14:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.621 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:26:00.621 14:40:07 -- host/discovery.sh@63 -- # xargs 00:26:00.621 14:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.621 14:40:07 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:26:00.621 14:40:07 -- host/discovery.sh@104 -- # get_notification_count 00:26:00.621 14:40:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:00.621 14:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.621 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:26:00.621 14:40:07 -- host/discovery.sh@74 -- # jq '. | length' 00:26:00.621 14:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.621 14:40:07 -- host/discovery.sh@74 -- # notification_count=1 00:26:00.621 14:40:07 -- host/discovery.sh@75 -- # notify_id=1 00:26:00.621 14:40:07 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:26:00.621 14:40:07 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:00.621 14:40:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.621 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:26:00.621 14:40:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.621 14:40:07 -- host/discovery.sh@109 -- # sleep 1 00:26:01.624 14:40:08 -- host/discovery.sh@110 -- # get_bdev_list 00:26:01.624 14:40:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.624 14:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.624 14:40:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.624 14:40:08 -- common/autotest_common.sh@10 -- # set +x 00:26:01.624 14:40:08 -- host/discovery.sh@55 -- # xargs 00:26:01.624 14:40:08 -- host/discovery.sh@55 -- # sort 00:26:01.624 14:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.624 14:40:08 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.624 14:40:08 -- host/discovery.sh@111 -- # get_notification_count 00:26:01.624 14:40:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:01.624 14:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.625 14:40:08 -- common/autotest_common.sh@10 -- # set +x 00:26:01.625 14:40:08 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:01.625 14:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.883 14:40:08 -- host/discovery.sh@74 -- # notification_count=1 00:26:01.883 14:40:08 -- host/discovery.sh@75 -- # notify_id=2 00:26:01.883 14:40:08 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:26:01.883 14:40:08 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:01.883 14:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.883 14:40:08 -- common/autotest_common.sh@10 -- # set +x 00:26:01.883 [2024-12-06 14:40:08.600292] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.883 [2024-12-06 14:40:08.601447] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:01.883 [2024-12-06 14:40:08.601491] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.883 14:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.883 14:40:08 -- host/discovery.sh@117 -- # sleep 1 00:26:01.883 [2024-12-06 14:40:08.687606] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:01.883 [2024-12-06 14:40:08.751965] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.883 [2024-12-06 14:40:08.751996] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:01.883 [2024-12-06 14:40:08.752004] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:02.818 14:40:09 -- host/discovery.sh@118 -- # get_subsystem_names 00:26:02.818 14:40:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.818 14:40:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.818 14:40:09 -- common/autotest_common.sh@10 -- # set +x 00:26:02.818 14:40:09 -- host/discovery.sh@59 -- # sort 00:26:02.818 14:40:09 -- host/discovery.sh@59 -- # xargs 00:26:02.818 14:40:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.818 14:40:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.818 14:40:09 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.818 14:40:09 -- host/discovery.sh@119 -- # get_bdev_list 00:26:02.818 14:40:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.818 14:40:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.818 14:40:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.818 14:40:09 -- common/autotest_common.sh@10 -- # set +x 00:26:02.818 14:40:09 -- host/discovery.sh@55 -- # sort 00:26:02.818 14:40:09 -- host/discovery.sh@55 -- # xargs 00:26:02.818 14:40:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.818 14:40:09 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:02.818 14:40:09 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:26:02.818 14:40:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:02.818 14:40:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:02.818 14:40:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.818 14:40:09 -- common/autotest_common.sh@10 -- # set +x 00:26:02.818 14:40:09 -- host/discovery.sh@63 
-- # sort -n 00:26:02.818 14:40:09 -- host/discovery.sh@63 -- # xargs 00:26:02.818 14:40:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.818 14:40:09 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:02.818 14:40:09 -- host/discovery.sh@121 -- # get_notification_count 00:26:02.818 14:40:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:02.818 14:40:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.818 14:40:09 -- common/autotest_common.sh@10 -- # set +x 00:26:02.818 14:40:09 -- host/discovery.sh@74 -- # jq '. | length' 00:26:03.077 14:40:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.077 14:40:09 -- host/discovery.sh@74 -- # notification_count=0 00:26:03.077 14:40:09 -- host/discovery.sh@75 -- # notify_id=2 00:26:03.077 14:40:09 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:26:03.078 14:40:09 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.078 14:40:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.078 14:40:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.078 [2024-12-06 14:40:09.825110] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:03.078 [2024-12-06 14:40:09.825152] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.078 [2024-12-06 14:40:09.829460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.078 [2024-12-06 14:40:09.829498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.078 [2024-12-06 14:40:09.829512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.078 [2024-12-06 14:40:09.829522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.078 [2024-12-06 14:40:09.829534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.078 [2024-12-06 14:40:09.829544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.078 [2024-12-06 14:40:09.829554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.078 [2024-12-06 14:40:09.829563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.078 [2024-12-06 14:40:09.829572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 14:40:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.078 14:40:09 -- host/discovery.sh@127 -- # sleep 1 00:26:03.078 [2024-12-06 14:40:09.839391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.849419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.078 [2024-12-06 14:40:09.849546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:03.078 [2024-12-06 14:40:09.849598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.849615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffa9c0 with addr=10.0.0.2, port=4420 00:26:03.078 [2024-12-06 14:40:09.849626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 [2024-12-06 14:40:09.849644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.849670] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.078 [2024-12-06 14:40:09.849686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.078 [2024-12-06 14:40:09.849709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.078 [2024-12-06 14:40:09.849725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.078 [2024-12-06 14:40:09.859479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.078 [2024-12-06 14:40:09.859601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.859645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.859660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffa9c0 with addr=10.0.0.2, port=4420 00:26:03.078 [2024-12-06 14:40:09.859670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 [2024-12-06 14:40:09.859685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.859713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.078 [2024-12-06 14:40:09.859721] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.078 [2024-12-06 14:40:09.859730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.078 [2024-12-06 14:40:09.859744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
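The repeated connect() failed, errno = 111 (ECONNREFUSED) and "Resetting controller failed" entries in this stretch are expected: the 4420 listener was just removed, so the host's existing path keeps failing to reconnect until the discovery poller re-reads the log page and drops 4420, leaving only 4421. The check the test makes once that settles amounts to the following sketch, using the helper above:

    # Only the 4421 path should remain on nvme0 after the 4420 listener is removed
    # (mirrors the [[ 4421 == 4421 ]] comparison at host/discovery.sh@130 further down).
    [[ "$(get_subsystem_paths nvme0)" == "4421" ]]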
00:26:03.078 [2024-12-06 14:40:09.869571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.078 [2024-12-06 14:40:09.869710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.869759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.869776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffa9c0 with addr=10.0.0.2, port=4420 00:26:03.078 [2024-12-06 14:40:09.869786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 [2024-12-06 14:40:09.869803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.869817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.078 [2024-12-06 14:40:09.869825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.078 [2024-12-06 14:40:09.869835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.078 [2024-12-06 14:40:09.869850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.078 [2024-12-06 14:40:09.879657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.078 [2024-12-06 14:40:09.879789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.879844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.879860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffa9c0 with addr=10.0.0.2, port=4420 00:26:03.078 [2024-12-06 14:40:09.879885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 [2024-12-06 14:40:09.879900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.879913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.078 [2024-12-06 14:40:09.879922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.078 [2024-12-06 14:40:09.879930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.078 [2024-12-06 14:40:09.879961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.078 [2024-12-06 14:40:09.889742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.078 [2024-12-06 14:40:09.889841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.889886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.889902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffa9c0 with addr=10.0.0.2, port=4420 00:26:03.078 [2024-12-06 14:40:09.889913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 [2024-12-06 14:40:09.889928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.889942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.078 [2024-12-06 14:40:09.889950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.078 [2024-12-06 14:40:09.889959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.078 [2024-12-06 14:40:09.889973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.078 [2024-12-06 14:40:09.899812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.078 [2024-12-06 14:40:09.899893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.899937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.899954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffa9c0 with addr=10.0.0.2, port=4420 00:26:03.078 [2024-12-06 14:40:09.899963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 [2024-12-06 14:40:09.899979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.899992] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.078 [2024-12-06 14:40:09.900001] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.078 [2024-12-06 14:40:09.900010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.078 [2024-12-06 14:40:09.900024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.078 [2024-12-06 14:40:09.909864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.078 [2024-12-06 14:40:09.909947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.909992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.078 [2024-12-06 14:40:09.910008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffa9c0 with addr=10.0.0.2, port=4420 00:26:03.078 [2024-12-06 14:40:09.910018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffa9c0 is same with the state(5) to be set 00:26:03.078 [2024-12-06 14:40:09.910035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffa9c0 (9): Bad file descriptor 00:26:03.078 [2024-12-06 14:40:09.910048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.078 [2024-12-06 14:40:09.910057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.078 [2024-12-06 14:40:09.910065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.078 [2024-12-06 14:40:09.910079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.078 [2024-12-06 14:40:09.911157] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:03.078 [2024-12-06 14:40:09.911202] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:04.014 14:40:10 -- host/discovery.sh@128 -- # get_subsystem_names 00:26:04.014 14:40:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:04.014 14:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.014 14:40:10 -- common/autotest_common.sh@10 -- # set +x 00:26:04.014 14:40:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:04.014 14:40:10 -- host/discovery.sh@59 -- # sort 00:26:04.014 14:40:10 -- host/discovery.sh@59 -- # xargs 00:26:04.014 14:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.014 14:40:10 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.014 14:40:10 -- host/discovery.sh@129 -- # get_bdev_list 00:26:04.014 14:40:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.014 14:40:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.014 14:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.014 14:40:10 -- common/autotest_common.sh@10 -- # set +x 00:26:04.014 14:40:10 -- host/discovery.sh@55 -- # sort 00:26:04.014 14:40:10 -- host/discovery.sh@55 -- # xargs 00:26:04.014 14:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.014 14:40:10 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:04.014 14:40:10 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:26:04.014 14:40:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:04.014 14:40:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:04.014 14:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.014 14:40:10 -- common/autotest_common.sh@10 -- # set +x 00:26:04.014 14:40:10 -- 
host/discovery.sh@63 -- # xargs 00:26:04.014 14:40:10 -- host/discovery.sh@63 -- # sort -n 00:26:04.014 14:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.272 14:40:10 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:26:04.272 14:40:10 -- host/discovery.sh@131 -- # get_notification_count 00:26:04.272 14:40:11 -- host/discovery.sh@74 -- # jq '. | length' 00:26:04.272 14:40:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:04.272 14:40:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.272 14:40:11 -- common/autotest_common.sh@10 -- # set +x 00:26:04.272 14:40:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.272 14:40:11 -- host/discovery.sh@74 -- # notification_count=0 00:26:04.272 14:40:11 -- host/discovery.sh@75 -- # notify_id=2 00:26:04.272 14:40:11 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:26:04.272 14:40:11 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:04.272 14:40:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.272 14:40:11 -- common/autotest_common.sh@10 -- # set +x 00:26:04.272 14:40:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.272 14:40:11 -- host/discovery.sh@135 -- # sleep 1 00:26:05.207 14:40:12 -- host/discovery.sh@136 -- # get_subsystem_names 00:26:05.207 14:40:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:05.207 14:40:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:05.207 14:40:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.207 14:40:12 -- common/autotest_common.sh@10 -- # set +x 00:26:05.207 14:40:12 -- host/discovery.sh@59 -- # sort 00:26:05.207 14:40:12 -- host/discovery.sh@59 -- # xargs 00:26:05.207 14:40:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.207 14:40:12 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:26:05.207 14:40:12 -- host/discovery.sh@137 -- # get_bdev_list 00:26:05.207 14:40:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.207 14:40:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.207 14:40:12 -- common/autotest_common.sh@10 -- # set +x 00:26:05.207 14:40:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.207 14:40:12 -- host/discovery.sh@55 -- # sort 00:26:05.207 14:40:12 -- host/discovery.sh@55 -- # xargs 00:26:05.207 14:40:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.466 14:40:12 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:26:05.466 14:40:12 -- host/discovery.sh@138 -- # get_notification_count 00:26:05.466 14:40:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:05.466 14:40:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:05.466 14:40:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.466 14:40:12 -- common/autotest_common.sh@10 -- # set +x 00:26:05.466 14:40:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.466 14:40:12 -- host/discovery.sh@74 -- # notification_count=2 00:26:05.466 14:40:12 -- host/discovery.sh@75 -- # notify_id=4 00:26:05.466 14:40:12 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:26:05.466 14:40:12 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.466 14:40:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.466 14:40:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.402 [2024-12-06 14:40:13.260436] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:06.402 [2024-12-06 14:40:13.260462] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:06.402 [2024-12-06 14:40:13.260481] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:06.402 [2024-12-06 14:40:13.346758] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:06.661 [2024-12-06 14:40:13.406285] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:06.661 [2024-12-06 14:40:13.406546] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:06.661 14:40:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.661 14:40:13 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.661 14:40:13 -- common/autotest_common.sh@650 -- # local es=0 00:26:06.661 14:40:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.661 14:40:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:06.662 14:40:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.662 14:40:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:06.662 14:40:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.662 14:40:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.662 14:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.662 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:06.662 2024/12/06 14:40:13 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:06.662 request: 00:26:06.662 { 00:26:06.662 "method": "bdev_nvme_start_discovery", 00:26:06.662 "params": { 00:26:06.662 "name": "nvme", 00:26:06.662 "trtype": "tcp", 00:26:06.662 "traddr": "10.0.0.2", 00:26:06.662 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:06.662 
"adrfam": "ipv4", 00:26:06.662 "trsvcid": "8009", 00:26:06.662 "wait_for_attach": true 00:26:06.662 } 00:26:06.662 } 00:26:06.662 Got JSON-RPC error response 00:26:06.662 GoRPCClient: error on JSON-RPC call 00:26:06.662 14:40:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:06.662 14:40:13 -- common/autotest_common.sh@653 -- # es=1 00:26:06.662 14:40:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:06.662 14:40:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:06.662 14:40:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:06.662 14:40:13 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # sort 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # xargs 00:26:06.662 14:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.662 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:06.662 14:40:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.662 14:40:13 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:26:06.662 14:40:13 -- host/discovery.sh@147 -- # get_bdev_list 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.662 14:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.662 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # sort 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # xargs 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.662 14:40:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.662 14:40:13 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:06.662 14:40:13 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.662 14:40:13 -- common/autotest_common.sh@650 -- # local es=0 00:26:06.662 14:40:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.662 14:40:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:06.662 14:40:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.662 14:40:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:06.662 14:40:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.662 14:40:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.662 14:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.662 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:06.662 2024/12/06 14:40:13 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:06.662 request: 00:26:06.662 { 00:26:06.662 "method": "bdev_nvme_start_discovery", 00:26:06.662 "params": { 00:26:06.662 "name": "nvme_second", 00:26:06.662 "trtype": "tcp", 00:26:06.662 "traddr": "10.0.0.2", 
00:26:06.662 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:06.662 "adrfam": "ipv4", 00:26:06.662 "trsvcid": "8009", 00:26:06.662 "wait_for_attach": true 00:26:06.662 } 00:26:06.662 } 00:26:06.662 Got JSON-RPC error response 00:26:06.662 GoRPCClient: error on JSON-RPC call 00:26:06.662 14:40:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:06.662 14:40:13 -- common/autotest_common.sh@653 -- # es=1 00:26:06.662 14:40:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:06.662 14:40:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:06.662 14:40:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:06.662 14:40:13 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:06.662 14:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # sort 00:26:06.662 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:06.662 14:40:13 -- host/discovery.sh@67 -- # xargs 00:26:06.662 14:40:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.662 14:40:13 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:26:06.662 14:40:13 -- host/discovery.sh@153 -- # get_bdev_list 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.662 14:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # xargs 00:26:06.662 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:06.662 14:40:13 -- host/discovery.sh@55 -- # sort 00:26:06.921 14:40:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.921 14:40:13 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:06.921 14:40:13 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:06.921 14:40:13 -- common/autotest_common.sh@650 -- # local es=0 00:26:06.921 14:40:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:06.921 14:40:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:06.921 14:40:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.921 14:40:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:06.921 14:40:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.921 14:40:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:06.921 14:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.921 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:07.857 [2024-12-06 14:40:14.695807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.857 [2024-12-06 14:40:14.695926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.857 [2024-12-06 14:40:14.695945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6970 with addr=10.0.0.2, port=8010 00:26:07.857 [2024-12-06 14:40:14.695977] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:07.857 [2024-12-06 14:40:14.695987] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:07.857 [2024-12-06 14:40:14.695995] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:08.790 [2024-12-06 14:40:15.695807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.790 [2024-12-06 14:40:15.695913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.790 [2024-12-06 14:40:15.695932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6970 with addr=10.0.0.2, port=8010 00:26:08.790 [2024-12-06 14:40:15.695956] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:08.790 [2024-12-06 14:40:15.695966] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:08.790 [2024-12-06 14:40:15.695976] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:10.167 [2024-12-06 14:40:16.695646] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:10.167 2024/12/06 14:40:16 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:26:10.167 request: 00:26:10.167 { 00:26:10.167 "method": "bdev_nvme_start_discovery", 00:26:10.167 "params": { 00:26:10.167 "name": "nvme_second", 00:26:10.167 "trtype": "tcp", 00:26:10.167 "traddr": "10.0.0.2", 00:26:10.167 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:10.167 "adrfam": "ipv4", 00:26:10.167 "trsvcid": "8010", 00:26:10.167 "attach_timeout_ms": 3000 00:26:10.167 } 00:26:10.167 } 00:26:10.167 Got JSON-RPC error response 00:26:10.167 GoRPCClient: error on JSON-RPC call 00:26:10.167 14:40:16 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:10.167 14:40:16 -- common/autotest_common.sh@653 -- # es=1 00:26:10.167 14:40:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:10.167 14:40:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:10.167 14:40:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:10.167 14:40:16 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:26:10.167 14:40:16 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:10.167 14:40:16 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:10.167 14:40:16 -- host/discovery.sh@67 -- # sort 00:26:10.167 14:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.167 14:40:16 -- host/discovery.sh@67 -- # xargs 00:26:10.167 14:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:10.167 14:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.167 14:40:16 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:26:10.167 14:40:16 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:26:10.167 14:40:16 -- host/discovery.sh@162 -- # kill 86129 00:26:10.167 14:40:16 -- host/discovery.sh@163 -- # nvmftestfini 00:26:10.167 14:40:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:10.167 14:40:16 -- nvmf/common.sh@116 -- # sync 00:26:10.167 14:40:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:10.167 14:40:16 -- nvmf/common.sh@119 -- # set +e 00:26:10.167 14:40:16 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:26:10.167 14:40:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:10.167 rmmod nvme_tcp 00:26:10.167 rmmod nvme_fabrics 00:26:10.167 rmmod nvme_keyring 00:26:10.167 14:40:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:10.167 14:40:16 -- nvmf/common.sh@123 -- # set -e 00:26:10.167 14:40:16 -- nvmf/common.sh@124 -- # return 0 00:26:10.167 14:40:16 -- nvmf/common.sh@477 -- # '[' -n 86079 ']' 00:26:10.167 14:40:16 -- nvmf/common.sh@478 -- # killprocess 86079 00:26:10.167 14:40:16 -- common/autotest_common.sh@936 -- # '[' -z 86079 ']' 00:26:10.167 14:40:16 -- common/autotest_common.sh@940 -- # kill -0 86079 00:26:10.167 14:40:16 -- common/autotest_common.sh@941 -- # uname 00:26:10.167 14:40:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:10.167 14:40:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86079 00:26:10.167 killing process with pid 86079 00:26:10.167 14:40:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:10.167 14:40:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:10.167 14:40:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86079' 00:26:10.168 14:40:16 -- common/autotest_common.sh@955 -- # kill 86079 00:26:10.168 14:40:16 -- common/autotest_common.sh@960 -- # wait 86079 00:26:10.426 14:40:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:10.426 14:40:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:10.426 14:40:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:10.426 14:40:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.426 14:40:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:10.426 14:40:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.426 14:40:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.426 14:40:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.426 14:40:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:10.426 00:26:10.426 real 0m14.366s 00:26:10.426 user 0m27.918s 00:26:10.426 sys 0m1.806s 00:26:10.426 14:40:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:10.426 ************************************ 00:26:10.427 END TEST nvmf_discovery 00:26:10.427 14:40:17 -- common/autotest_common.sh@10 -- # set +x 00:26:10.427 ************************************ 00:26:10.427 14:40:17 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:10.427 14:40:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:10.427 14:40:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:10.427 14:40:17 -- common/autotest_common.sh@10 -- # set +x 00:26:10.427 ************************************ 00:26:10.427 START TEST nvmf_discovery_remove_ifc 00:26:10.427 ************************************ 00:26:10.427 14:40:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:10.686 * Looking for test storage... 
00:26:10.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:10.686 14:40:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:10.686 14:40:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:10.686 14:40:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:10.686 14:40:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:10.686 14:40:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:10.686 14:40:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:10.686 14:40:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:10.686 14:40:17 -- scripts/common.sh@335 -- # IFS=.-: 00:26:10.686 14:40:17 -- scripts/common.sh@335 -- # read -ra ver1 00:26:10.686 14:40:17 -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.686 14:40:17 -- scripts/common.sh@336 -- # read -ra ver2 00:26:10.686 14:40:17 -- scripts/common.sh@337 -- # local 'op=<' 00:26:10.686 14:40:17 -- scripts/common.sh@339 -- # ver1_l=2 00:26:10.686 14:40:17 -- scripts/common.sh@340 -- # ver2_l=1 00:26:10.686 14:40:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:10.686 14:40:17 -- scripts/common.sh@343 -- # case "$op" in 00:26:10.686 14:40:17 -- scripts/common.sh@344 -- # : 1 00:26:10.686 14:40:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:10.686 14:40:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:10.686 14:40:17 -- scripts/common.sh@364 -- # decimal 1 00:26:10.686 14:40:17 -- scripts/common.sh@352 -- # local d=1 00:26:10.686 14:40:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.686 14:40:17 -- scripts/common.sh@354 -- # echo 1 00:26:10.686 14:40:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:10.686 14:40:17 -- scripts/common.sh@365 -- # decimal 2 00:26:10.686 14:40:17 -- scripts/common.sh@352 -- # local d=2 00:26:10.686 14:40:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.686 14:40:17 -- scripts/common.sh@354 -- # echo 2 00:26:10.686 14:40:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:10.686 14:40:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:10.686 14:40:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:10.686 14:40:17 -- scripts/common.sh@367 -- # return 0 00:26:10.686 14:40:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.686 14:40:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:10.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.686 --rc genhtml_branch_coverage=1 00:26:10.686 --rc genhtml_function_coverage=1 00:26:10.686 --rc genhtml_legend=1 00:26:10.686 --rc geninfo_all_blocks=1 00:26:10.686 --rc geninfo_unexecuted_blocks=1 00:26:10.686 00:26:10.686 ' 00:26:10.686 14:40:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:10.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.686 --rc genhtml_branch_coverage=1 00:26:10.686 --rc genhtml_function_coverage=1 00:26:10.686 --rc genhtml_legend=1 00:26:10.686 --rc geninfo_all_blocks=1 00:26:10.686 --rc geninfo_unexecuted_blocks=1 00:26:10.686 00:26:10.686 ' 00:26:10.686 14:40:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:10.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.686 --rc genhtml_branch_coverage=1 00:26:10.686 --rc genhtml_function_coverage=1 00:26:10.686 --rc genhtml_legend=1 00:26:10.686 --rc geninfo_all_blocks=1 00:26:10.686 --rc geninfo_unexecuted_blocks=1 00:26:10.686 00:26:10.686 ' 00:26:10.686 
14:40:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:10.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.686 --rc genhtml_branch_coverage=1 00:26:10.686 --rc genhtml_function_coverage=1 00:26:10.686 --rc genhtml_legend=1 00:26:10.686 --rc geninfo_all_blocks=1 00:26:10.686 --rc geninfo_unexecuted_blocks=1 00:26:10.686 00:26:10.686 ' 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.686 14:40:17 -- nvmf/common.sh@7 -- # uname -s 00:26:10.686 14:40:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.686 14:40:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.686 14:40:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.686 14:40:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.686 14:40:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.686 14:40:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.686 14:40:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.686 14:40:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.686 14:40:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.686 14:40:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.686 14:40:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:26:10.686 14:40:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:26:10.686 14:40:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.686 14:40:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.686 14:40:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:10.686 14:40:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.686 14:40:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.686 14:40:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.686 14:40:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.686 14:40:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.686 14:40:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.686 14:40:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.686 14:40:17 -- paths/export.sh@5 -- # export PATH 00:26:10.686 14:40:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.686 14:40:17 -- nvmf/common.sh@46 -- # : 0 00:26:10.686 14:40:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:10.686 14:40:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:10.686 14:40:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:10.686 14:40:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.686 14:40:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.686 14:40:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:10.686 14:40:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:10.686 14:40:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:10.686 14:40:17 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:10.686 14:40:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:10.686 14:40:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.686 14:40:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:10.686 14:40:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:10.686 14:40:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:10.686 14:40:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.686 14:40:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.686 14:40:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.686 14:40:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:10.686 14:40:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:10.686 14:40:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:10.686 14:40:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:10.686 14:40:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:10.686 14:40:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:10.686 14:40:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.686 14:40:17 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.686 14:40:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:10.686 14:40:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:10.686 14:40:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:10.686 14:40:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:10.686 14:40:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:10.686 14:40:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.686 14:40:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:10.686 14:40:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:10.686 14:40:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:10.686 14:40:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:10.686 14:40:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:10.686 14:40:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:10.686 Cannot find device "nvmf_tgt_br" 00:26:10.686 14:40:17 -- nvmf/common.sh@154 -- # true 00:26:10.686 14:40:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.686 Cannot find device "nvmf_tgt_br2" 00:26:10.686 14:40:17 -- nvmf/common.sh@155 -- # true 00:26:10.686 14:40:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:10.686 14:40:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:10.686 Cannot find device "nvmf_tgt_br" 00:26:10.686 14:40:17 -- nvmf/common.sh@157 -- # true 00:26:10.686 14:40:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:10.686 Cannot find device "nvmf_tgt_br2" 00:26:10.686 14:40:17 -- nvmf/common.sh@158 -- # true 00:26:10.686 14:40:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:10.686 14:40:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:10.945 14:40:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:10.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.945 14:40:17 -- nvmf/common.sh@161 -- # true 00:26:10.945 14:40:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:10.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.945 14:40:17 -- nvmf/common.sh@162 -- # true 00:26:10.945 14:40:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:10.945 14:40:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:10.945 14:40:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:10.945 14:40:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:10.945 14:40:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:10.945 14:40:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:10.945 14:40:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:10.945 14:40:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:10.945 14:40:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:10.945 14:40:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:10.945 14:40:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:10.945 14:40:17 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:10.945 14:40:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:10.945 14:40:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:10.945 14:40:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:10.945 14:40:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:10.945 14:40:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:10.945 14:40:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:10.945 14:40:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:10.945 14:40:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:10.945 14:40:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:10.945 14:40:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:10.945 14:40:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:10.945 14:40:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:10.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:26:10.945 00:26:10.945 --- 10.0.0.2 ping statistics --- 00:26:10.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.945 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:26:10.945 14:40:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:10.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:10.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:26:10.945 00:26:10.945 --- 10.0.0.3 ping statistics --- 00:26:10.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.945 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:26:10.945 14:40:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:10.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:10.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:10.945 00:26:10.945 --- 10.0.0.1 ping statistics --- 00:26:10.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.945 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:10.945 14:40:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.945 14:40:17 -- nvmf/common.sh@421 -- # return 0 00:26:10.945 14:40:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:10.945 14:40:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.945 14:40:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:10.945 14:40:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:10.945 14:40:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.945 14:40:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:10.945 14:40:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:10.945 14:40:17 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:10.945 14:40:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:10.945 14:40:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:10.945 14:40:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.204 14:40:17 -- nvmf/common.sh@469 -- # nvmfpid=86647 00:26:11.204 14:40:17 -- nvmf/common.sh@470 -- # waitforlisten 86647 00:26:11.204 14:40:17 -- common/autotest_common.sh@829 -- # '[' -z 86647 ']' 00:26:11.204 14:40:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:11.204 14:40:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.204 14:40:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:11.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.204 14:40:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.204 14:40:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:11.204 14:40:17 -- common/autotest_common.sh@10 -- # set +x 00:26:11.204 [2024-12-06 14:40:17.964827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:11.204 [2024-12-06 14:40:17.964898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.204 [2024-12-06 14:40:18.095725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.463 [2024-12-06 14:40:18.211825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:11.463 [2024-12-06 14:40:18.211974] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.463 [2024-12-06 14:40:18.211987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.463 [2024-12-06 14:40:18.211996] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:11.463 [2024-12-06 14:40:18.212031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.030 14:40:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:12.030 14:40:18 -- common/autotest_common.sh@862 -- # return 0 00:26:12.030 14:40:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:12.030 14:40:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.030 14:40:18 -- common/autotest_common.sh@10 -- # set +x 00:26:12.030 14:40:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.030 14:40:18 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:12.030 14:40:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.030 14:40:18 -- common/autotest_common.sh@10 -- # set +x 00:26:12.030 [2024-12-06 14:40:18.971491] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.030 [2024-12-06 14:40:18.979634] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:12.030 null0 00:26:12.289 [2024-12-06 14:40:19.011569] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.289 14:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.289 14:40:19 -- host/discovery_remove_ifc.sh@59 -- # hostpid=86696 00:26:12.289 14:40:19 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:12.289 14:40:19 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 86696 /tmp/host.sock 00:26:12.289 14:40:19 -- common/autotest_common.sh@829 -- # '[' -z 86696 ']' 00:26:12.289 14:40:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:12.289 14:40:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.289 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:12.289 14:40:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:12.289 14:40:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.289 14:40:19 -- common/autotest_common.sh@10 -- # set +x 00:26:12.289 [2024-12-06 14:40:19.085348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:12.289 [2024-12-06 14:40:19.085460] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86696 ] 00:26:12.289 [2024-12-06 14:40:19.223018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.547 [2024-12-06 14:40:19.335130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:12.547 [2024-12-06 14:40:19.335346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.113 14:40:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.113 14:40:20 -- common/autotest_common.sh@862 -- # return 0 00:26:13.114 14:40:20 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:13.114 14:40:20 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:13.114 14:40:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.114 14:40:20 -- common/autotest_common.sh@10 -- # set +x 00:26:13.114 14:40:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.114 14:40:20 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:13.114 14:40:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.114 14:40:20 -- common/autotest_common.sh@10 -- # set +x 00:26:13.372 14:40:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.372 14:40:20 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:13.372 14:40:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.372 14:40:20 -- common/autotest_common.sh@10 -- # set +x 00:26:14.309 [2024-12-06 14:40:21.165809] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:14.309 [2024-12-06 14:40:21.165879] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:14.309 [2024-12-06 14:40:21.165905] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:14.309 [2024-12-06 14:40:21.251968] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:14.568 [2024-12-06 14:40:21.308957] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:14.568 [2024-12-06 14:40:21.309028] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:14.568 [2024-12-06 14:40:21.309077] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:14.568 [2024-12-06 14:40:21.309099] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:14.568 [2024-12-06 14:40:21.309133] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:14.568 14:40:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.568 14:40:21 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.568 14:40:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.568 [2024-12-06 14:40:21.314568] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19ec840 was disconnected and freed. delete nvme_qpair. 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.568 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.568 14:40:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.568 14:40:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.568 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.568 14:40:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.568 14:40:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.504 14:40:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.504 14:40:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.504 14:40:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.504 14:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.504 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:15.504 14:40:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.504 14:40:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.762 14:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.762 14:40:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.762 14:40:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.699 14:40:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.699 14:40:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.699 14:40:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.699 14:40:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.699 14:40:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.699 14:40:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.699 14:40:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.699 14:40:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.699 14:40:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.699 14:40:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.635 14:40:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.635 14:40:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
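Everything from the discovery attach down to the repeated `get_bdev_list` calls above follows one pattern: start discovery against the target's 8009 listener with short loss/reconnect timeouts, then poll `bdev_get_bdevs` once a second until the expected bdev name appears (or, after the interface is pulled, disappears). A condensed sketch of that flow, assuming the host-side app is listening on /tmp/host.sock as in this run (error and timeout handling omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/tmp/host.sock

    # discovery against the target's 8009 listener; the short timeouts make the
    # bdev go away quickly once the path is gone
    "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    get_bdev_list() {
        "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # poll once a second until the bdev list matches what we expect;
    # an empty string means "wait until every bdev is gone"
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1    # after the attach completes
    # ...pull the target interface, then:
    wait_for_bdev ''         # controller loss removes the bdev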
00:26:17.635 14:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.635 14:40:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.635 14:40:24 -- common/autotest_common.sh@10 -- # set +x 00:26:17.635 14:40:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.635 14:40:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.893 14:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.894 14:40:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.894 14:40:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.827 14:40:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.827 14:40:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.828 14:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.828 14:40:25 -- common/autotest_common.sh@10 -- # set +x 00:26:18.828 14:40:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.828 14:40:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.828 14:40:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.828 14:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.828 14:40:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.828 14:40:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.793 14:40:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.793 14:40:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.793 14:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.793 14:40:26 -- common/autotest_common.sh@10 -- # set +x 00:26:19.793 14:40:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.793 14:40:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.793 14:40:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.793 14:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.793 [2024-12-06 14:40:26.735909] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:19.793 [2024-12-06 14:40:26.736006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.793 [2024-12-06 14:40:26.736021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.793 [2024-12-06 14:40:26.736034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.793 [2024-12-06 14:40:26.736043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.793 [2024-12-06 14:40:26.736053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.793 [2024-12-06 14:40:26.736061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.793 [2024-12-06 14:40:26.736071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.793 [2024-12-06 14:40:26.736080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.793 [2024-12-06 
14:40:26.736089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.793 [2024-12-06 14:40:26.736097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.793 [2024-12-06 14:40:26.736106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19639f0 is same with the state(5) to be set 00:26:19.793 [2024-12-06 14:40:26.745905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19639f0 (9): Bad file descriptor 00:26:19.793 14:40:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.793 14:40:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.793 [2024-12-06 14:40:26.755929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:21.166 14:40:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.166 14:40:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.166 14:40:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.166 14:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.166 14:40:27 -- common/autotest_common.sh@10 -- # set +x 00:26:21.166 14:40:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.166 14:40:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.166 [2024-12-06 14:40:27.787556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:22.100 [2024-12-06 14:40:28.811557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:22.101 [2024-12-06 14:40:28.811684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19639f0 with addr=10.0.0.2, port=4420 00:26:22.101 [2024-12-06 14:40:28.811722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19639f0 is same with the state(5) to be set 00:26:22.101 [2024-12-06 14:40:28.811790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:22.101 [2024-12-06 14:40:28.811814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:22.101 [2024-12-06 14:40:28.811834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:22.101 [2024-12-06 14:40:28.811860] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:22.101 [2024-12-06 14:40:28.812711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19639f0 (9): Bad file descriptor 00:26:22.101 [2024-12-06 14:40:28.812775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
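The errno 110 (Connection timed out) on the admin queue and the aborted ASYNC EVENT/KEEP ALIVE commands above are the direct result of the `ip addr del`/`link down` on nvmf_tgt_if: the host keeps the controller for now, but every reconnect attempt fails. One hedged way to watch that from the host side while it happens is to poll the controller list; `bdev_nvme_get_controllers` is a standard SPDK RPC, the loop itself is illustrative:

    # dump controller names once a second while the path is down; the entry
    # stays listed until ctrlr-loss-timeout-sec (2s in this run) expires
    for _ in {1..5}; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
            bdev_nvme_get_controllers | jq -r '.[].name'
        sleep 1
    done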
00:26:22.101 [2024-12-06 14:40:28.812827] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:22.101 [2024-12-06 14:40:28.812896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.101 [2024-12-06 14:40:28.812926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.101 [2024-12-06 14:40:28.812954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.101 [2024-12-06 14:40:28.812975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.101 [2024-12-06 14:40:28.812999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.101 [2024-12-06 14:40:28.813029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.101 [2024-12-06 14:40:28.813051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.101 [2024-12-06 14:40:28.813073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.101 [2024-12-06 14:40:28.813096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.101 [2024-12-06 14:40:28.813117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.101 [2024-12-06 14:40:28.813138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:22.101 [2024-12-06 14:40:28.813169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1963e00 (9): Bad file descriptor 00:26:22.101 [2024-12-06 14:40:28.813805] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:22.101 [2024-12-06 14:40:28.813840] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:22.101 14:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.101 14:40:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.101 14:40:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.035 14:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.035 14:40:29 -- common/autotest_common.sh@10 -- # set +x 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.035 14:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.035 14:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.035 14:40:29 -- common/autotest_common.sh@10 -- # set +x 00:26:23.035 14:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:23.035 14:40:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.968 [2024-12-06 14:40:30.817307] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.968 [2024-12-06 14:40:30.817339] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.968 [2024-12-06 14:40:30.817372] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.968 [2024-12-06 14:40:30.904475] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:24.227 [2024-12-06 14:40:30.959817] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:24.227 [2024-12-06 14:40:30.959895] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:24.227 [2024-12-06 14:40:30.959920] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:24.227 [2024-12-06 14:40:30.959937] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:26:24.227 [2024-12-06 14:40:30.959946] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:24.227 [2024-12-06 14:40:30.965887] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19a7080 was disconnected and freed. delete nvme_qpair. 00:26:24.227 14:40:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.227 14:40:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.227 14:40:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.227 14:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.227 14:40:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.227 14:40:30 -- common/autotest_common.sh@10 -- # set +x 00:26:24.227 14:40:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.227 14:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.227 14:40:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:24.227 14:40:31 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:24.227 14:40:31 -- host/discovery_remove_ifc.sh@90 -- # killprocess 86696 00:26:24.227 14:40:31 -- common/autotest_common.sh@936 -- # '[' -z 86696 ']' 00:26:24.227 14:40:31 -- common/autotest_common.sh@940 -- # kill -0 86696 00:26:24.227 14:40:31 -- common/autotest_common.sh@941 -- # uname 00:26:24.227 14:40:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:24.227 14:40:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86696 00:26:24.227 killing process with pid 86696 00:26:24.227 14:40:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:24.227 14:40:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:24.227 14:40:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86696' 00:26:24.227 14:40:31 -- common/autotest_common.sh@955 -- # kill 86696 00:26:24.227 14:40:31 -- common/autotest_common.sh@960 -- # wait 86696 00:26:24.486 14:40:31 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:24.486 14:40:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:24.486 14:40:31 -- nvmf/common.sh@116 -- # sync 00:26:24.486 14:40:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:24.486 14:40:31 -- nvmf/common.sh@119 -- # set +e 00:26:24.486 14:40:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:24.486 14:40:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:24.486 rmmod nvme_tcp 00:26:24.486 rmmod nvme_fabrics 00:26:24.486 rmmod nvme_keyring 00:26:24.486 14:40:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:24.486 14:40:31 -- nvmf/common.sh@123 -- # set -e 00:26:24.486 14:40:31 -- nvmf/common.sh@124 -- # return 0 00:26:24.486 14:40:31 -- nvmf/common.sh@477 -- # '[' -n 86647 ']' 00:26:24.486 14:40:31 -- nvmf/common.sh@478 -- # killprocess 86647 00:26:24.486 14:40:31 -- common/autotest_common.sh@936 -- # '[' -z 86647 ']' 00:26:24.486 14:40:31 -- common/autotest_common.sh@940 -- # kill -0 86647 00:26:24.486 14:40:31 -- common/autotest_common.sh@941 -- # uname 00:26:24.486 14:40:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:24.486 14:40:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86647 00:26:24.486 killing process with pid 86647 00:26:24.486 14:40:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:24.486 14:40:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
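The `killprocess 86696` sequence above checks what the pid actually is (`ps --no-headers -o comm=`) before signalling it, refuses to kill anything running as `sudo`, and then waits so the reactor can run its shutdown path. A reduced sketch of that guard (function body paraphrased, not copied from autotest_common.sh):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1               # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap it if it is our child
    }

    killprocess "$hostpid"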
00:26:24.486 14:40:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86647' 00:26:24.486 14:40:31 -- common/autotest_common.sh@955 -- # kill 86647 00:26:24.486 14:40:31 -- common/autotest_common.sh@960 -- # wait 86647 00:26:24.745 14:40:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:24.745 14:40:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:24.745 14:40:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:24.745 14:40:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.745 14:40:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:24.745 14:40:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.745 14:40:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.745 14:40:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.745 14:40:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:24.745 00:26:24.745 real 0m14.352s 00:26:24.745 user 0m24.463s 00:26:24.745 sys 0m1.659s 00:26:24.745 14:40:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:24.745 14:40:31 -- common/autotest_common.sh@10 -- # set +x 00:26:24.745 ************************************ 00:26:24.745 END TEST nvmf_discovery_remove_ifc 00:26:24.745 ************************************ 00:26:25.004 14:40:31 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:26:25.004 14:40:31 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:25.004 14:40:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:25.004 14:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:25.004 14:40:31 -- common/autotest_common.sh@10 -- # set +x 00:26:25.004 ************************************ 00:26:25.004 START TEST nvmf_digest 00:26:25.004 ************************************ 00:26:25.004 14:40:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:25.004 * Looking for test storage... 00:26:25.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:25.004 14:40:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:25.004 14:40:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:25.004 14:40:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:25.004 14:40:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:25.004 14:40:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:25.004 14:40:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:25.004 14:40:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:25.004 14:40:31 -- scripts/common.sh@335 -- # IFS=.-: 00:26:25.004 14:40:31 -- scripts/common.sh@335 -- # read -ra ver1 00:26:25.004 14:40:31 -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.004 14:40:31 -- scripts/common.sh@336 -- # read -ra ver2 00:26:25.004 14:40:31 -- scripts/common.sh@337 -- # local 'op=<' 00:26:25.004 14:40:31 -- scripts/common.sh@339 -- # ver1_l=2 00:26:25.004 14:40:31 -- scripts/common.sh@340 -- # ver2_l=1 00:26:25.004 14:40:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:25.004 14:40:31 -- scripts/common.sh@343 -- # case "$op" in 00:26:25.004 14:40:31 -- scripts/common.sh@344 -- # : 1 00:26:25.004 14:40:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:25.004 14:40:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:25.004 14:40:31 -- scripts/common.sh@364 -- # decimal 1 00:26:25.004 14:40:31 -- scripts/common.sh@352 -- # local d=1 00:26:25.004 14:40:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.004 14:40:31 -- scripts/common.sh@354 -- # echo 1 00:26:25.004 14:40:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:25.004 14:40:31 -- scripts/common.sh@365 -- # decimal 2 00:26:25.004 14:40:31 -- scripts/common.sh@352 -- # local d=2 00:26:25.004 14:40:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.004 14:40:31 -- scripts/common.sh@354 -- # echo 2 00:26:25.004 14:40:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:25.004 14:40:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:25.004 14:40:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:25.004 14:40:31 -- scripts/common.sh@367 -- # return 0 00:26:25.004 14:40:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.005 14:40:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:25.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.005 --rc genhtml_branch_coverage=1 00:26:25.005 --rc genhtml_function_coverage=1 00:26:25.005 --rc genhtml_legend=1 00:26:25.005 --rc geninfo_all_blocks=1 00:26:25.005 --rc geninfo_unexecuted_blocks=1 00:26:25.005 00:26:25.005 ' 00:26:25.005 14:40:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:25.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.005 --rc genhtml_branch_coverage=1 00:26:25.005 --rc genhtml_function_coverage=1 00:26:25.005 --rc genhtml_legend=1 00:26:25.005 --rc geninfo_all_blocks=1 00:26:25.005 --rc geninfo_unexecuted_blocks=1 00:26:25.005 00:26:25.005 ' 00:26:25.005 14:40:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:25.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.005 --rc genhtml_branch_coverage=1 00:26:25.005 --rc genhtml_function_coverage=1 00:26:25.005 --rc genhtml_legend=1 00:26:25.005 --rc geninfo_all_blocks=1 00:26:25.005 --rc geninfo_unexecuted_blocks=1 00:26:25.005 00:26:25.005 ' 00:26:25.005 14:40:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:25.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.005 --rc genhtml_branch_coverage=1 00:26:25.005 --rc genhtml_function_coverage=1 00:26:25.005 --rc genhtml_legend=1 00:26:25.005 --rc geninfo_all_blocks=1 00:26:25.005 --rc geninfo_unexecuted_blocks=1 00:26:25.005 00:26:25.005 ' 00:26:25.005 14:40:31 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:25.005 14:40:31 -- nvmf/common.sh@7 -- # uname -s 00:26:25.005 14:40:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.005 14:40:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.005 14:40:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.005 14:40:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.005 14:40:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.005 14:40:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.005 14:40:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.005 14:40:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.005 14:40:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.005 14:40:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.005 14:40:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:26:25.005 
14:40:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:26:25.005 14:40:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.005 14:40:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.005 14:40:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:25.005 14:40:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:25.005 14:40:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.005 14:40:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.005 14:40:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.005 14:40:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.005 14:40:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.005 14:40:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.005 14:40:31 -- paths/export.sh@5 -- # export PATH 00:26:25.005 14:40:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.005 14:40:31 -- nvmf/common.sh@46 -- # : 0 00:26:25.005 14:40:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:25.005 14:40:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:25.005 14:40:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:25.005 14:40:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.005 14:40:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.005 14:40:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
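common.sh generates a fresh host NQN with `nvme gen-hostnqn` and keeps the matching `--hostnqn`/`--hostid` pair in an array so tests that use the kernel initiator can identify themselves consistently. A short sketch of how those pieces fit together when connecting to the first listener in this setup; the subsystem NQN below follows the test's nqn.2016-06.io.spdk:cnode convention and is used purely as an example, since this particular run drives I/O through bdevperf rather than `nvme connect`:

    NVME_HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}               # reuse the uuid part as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # connect to the first TCP listener the target exposes (10.0.0.2:4420)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"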
00:26:25.005 14:40:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:25.005 14:40:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:25.005 14:40:31 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:25.005 14:40:31 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:25.005 14:40:31 -- host/digest.sh@16 -- # runtime=2 00:26:25.005 14:40:31 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:26:25.005 14:40:31 -- host/digest.sh@132 -- # nvmftestinit 00:26:25.005 14:40:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:25.005 14:40:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.005 14:40:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:25.005 14:40:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:25.005 14:40:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:25.005 14:40:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.005 14:40:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.005 14:40:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.005 14:40:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:25.005 14:40:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:25.005 14:40:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:25.005 14:40:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:25.005 14:40:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:25.005 14:40:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:25.005 14:40:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.005 14:40:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.005 14:40:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:25.005 14:40:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:25.005 14:40:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:25.005 14:40:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:25.005 14:40:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:25.005 14:40:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.005 14:40:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:25.005 14:40:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:25.005 14:40:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:25.005 14:40:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:25.005 14:40:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:25.005 14:40:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:25.005 Cannot find device "nvmf_tgt_br" 00:26:25.005 14:40:31 -- nvmf/common.sh@154 -- # true 00:26:25.005 14:40:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:25.005 Cannot find device "nvmf_tgt_br2" 00:26:25.005 14:40:31 -- nvmf/common.sh@155 -- # true 00:26:25.005 14:40:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:25.264 14:40:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:25.264 Cannot find device "nvmf_tgt_br" 00:26:25.264 14:40:31 -- nvmf/common.sh@157 -- # true 00:26:25.264 14:40:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:25.264 Cannot find device "nvmf_tgt_br2" 00:26:25.264 14:40:31 -- nvmf/common.sh@158 -- # true 00:26:25.264 14:40:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:25.264 14:40:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:25.264 
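nvmf_veth_init first tears down whatever a previous run left behind, ignoring the "Cannot find device" failures (that is what the interleaved `# true` entries are), and then, in the entries that follow, rebuilds the same three-veth topology used earlier in this log: one initiator pair in the root namespace, two target pairs whose far ends live inside nvmf_tgt_ns_spdk, all bridged through nvmf_br. A condensed sketch of that bring-up, with error handling trimmed and device names matching the trace:

    ns=nvmf_tgt_ns_spdk

    # best-effort cleanup from any previous run
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if        2>/dev/null || true
    ip netns delete "$ns"              2>/dev/null || true

    # namespace plus three veth pairs: the initiator side stays in the root
    # namespace, the two target interfaces move into the namespace
    ip netns add "$ns"
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$ns"
    ip link set nvmf_tgt_if2 netns "$ns"

    # addressing: 10.0.0.1 reaches 10.0.0.2 and 10.0.0.3 across the bridge
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$ns" ip link set nvmf_tgt_if up
    ip netns exec "$ns" ip link set nvmf_tgt_if2 up
    ip netns exec "$ns" ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3      # sanity-check the data path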
14:40:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:25.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:25.264 14:40:32 -- nvmf/common.sh@161 -- # true 00:26:25.264 14:40:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:25.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:25.264 14:40:32 -- nvmf/common.sh@162 -- # true 00:26:25.264 14:40:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:25.264 14:40:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:25.264 14:40:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:25.264 14:40:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:25.264 14:40:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:25.264 14:40:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:25.264 14:40:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:25.264 14:40:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:25.264 14:40:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:25.264 14:40:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:25.264 14:40:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:25.264 14:40:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:25.264 14:40:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:25.264 14:40:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:25.264 14:40:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:25.264 14:40:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:25.264 14:40:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:25.264 14:40:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:25.264 14:40:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:25.264 14:40:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:25.264 14:40:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:25.264 14:40:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:25.524 14:40:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:25.524 14:40:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:25.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:26:25.524 00:26:25.524 --- 10.0.0.2 ping statistics --- 00:26:25.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.524 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:26:25.524 14:40:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:25.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:25.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:26:25.524 00:26:25.524 --- 10.0.0.3 ping statistics --- 00:26:25.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.524 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:25.524 14:40:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:25.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:25.524 00:26:25.524 --- 10.0.0.1 ping statistics --- 00:26:25.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.524 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:25.524 14:40:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.524 14:40:32 -- nvmf/common.sh@421 -- # return 0 00:26:25.524 14:40:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:25.524 14:40:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.524 14:40:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:25.524 14:40:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:25.524 14:40:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.524 14:40:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:25.524 14:40:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:25.524 14:40:32 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:25.524 14:40:32 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:26:25.524 14:40:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:25.524 14:40:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:25.524 14:40:32 -- common/autotest_common.sh@10 -- # set +x 00:26:25.524 ************************************ 00:26:25.524 START TEST nvmf_digest_clean 00:26:25.524 ************************************ 00:26:25.524 14:40:32 -- common/autotest_common.sh@1114 -- # run_digest 00:26:25.524 14:40:32 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:26:25.524 14:40:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:25.524 14:40:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:25.524 14:40:32 -- common/autotest_common.sh@10 -- # set +x 00:26:25.524 14:40:32 -- nvmf/common.sh@469 -- # nvmfpid=87115 00:26:25.524 14:40:32 -- nvmf/common.sh@470 -- # waitforlisten 87115 00:26:25.524 14:40:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:25.524 14:40:32 -- common/autotest_common.sh@829 -- # '[' -z 87115 ']' 00:26:25.524 14:40:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.524 14:40:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.524 14:40:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.524 14:40:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.524 14:40:32 -- common/autotest_common.sh@10 -- # set +x 00:26:25.524 [2024-12-06 14:40:32.342224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:25.524 [2024-12-06 14:40:32.342339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.524 [2024-12-06 14:40:32.477106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.786 [2024-12-06 14:40:32.573989] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:25.786 [2024-12-06 14:40:32.574174] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.786 [2024-12-06 14:40:32.574186] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.786 [2024-12-06 14:40:32.574194] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.786 [2024-12-06 14:40:32.574239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.723 14:40:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.723 14:40:33 -- common/autotest_common.sh@862 -- # return 0 00:26:26.723 14:40:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:26.724 14:40:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:26.724 14:40:33 -- common/autotest_common.sh@10 -- # set +x 00:26:26.724 14:40:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.724 14:40:33 -- host/digest.sh@120 -- # common_target_config 00:26:26.724 14:40:33 -- host/digest.sh@43 -- # rpc_cmd 00:26:26.724 14:40:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.724 14:40:33 -- common/autotest_common.sh@10 -- # set +x 00:26:26.724 null0 00:26:26.724 [2024-12-06 14:40:33.518309] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.724 [2024-12-06 14:40:33.542379] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.724 14:40:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.724 14:40:33 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:26:26.724 14:40:33 -- host/digest.sh@77 -- # local rw bs qd 00:26:26.724 14:40:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:26.724 14:40:33 -- host/digest.sh@80 -- # rw=randread 00:26:26.724 14:40:33 -- host/digest.sh@80 -- # bs=4096 00:26:26.724 14:40:33 -- host/digest.sh@80 -- # qd=128 00:26:26.724 14:40:33 -- host/digest.sh@82 -- # bperfpid=87166 00:26:26.724 14:40:33 -- host/digest.sh@83 -- # waitforlisten 87166 /var/tmp/bperf.sock 00:26:26.724 14:40:33 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:26.724 14:40:33 -- common/autotest_common.sh@829 -- # '[' -z 87166 ']' 00:26:26.724 14:40:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:26.724 14:40:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.724 14:40:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:26.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:26.724 14:40:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.724 14:40:33 -- common/autotest_common.sh@10 -- # set +x 00:26:26.724 [2024-12-06 14:40:33.606878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:26.724 [2024-12-06 14:40:33.606984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87166 ] 00:26:26.983 [2024-12-06 14:40:33.744177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.983 [2024-12-06 14:40:33.847984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.978 14:40:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.978 14:40:34 -- common/autotest_common.sh@862 -- # return 0 00:26:27.978 14:40:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:27.978 14:40:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:27.978 14:40:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:28.240 14:40:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.240 14:40:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.498 nvme0n1 00:26:28.498 14:40:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:28.498 14:40:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:28.757 Running I/O for 2 seconds... 
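Each of the four run_bperf jobs in this test follows the same pattern that was just traced: start bdevperf suspended on its own RPC socket, finish its framework init, attach the target's subsystem over TCP with data digest enabled, then trigger the timed run. A condensed sketch of that sequence, using the paths and arguments from the traces above (here the first job: randread, 4096-byte I/O, queue depth 128):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # 1. bdevperf starts idle (-z --wait-for-rpc) and exposes its own RPC socket
    $BPERF -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the harness waits for /var/tmp/bperf.sock to appear before issuing RPCs)
    # 2. complete framework init, then attach the remote controller with data digest on (--ddgst)
    $RPC -s /var/tmp/bperf.sock framework_start_init
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 3. run the 2-second workload; results are printed as the Latency(us) table that follows
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests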
00:26:30.661 00:26:30.661 Latency(us) 00:26:30.661 [2024-12-06T14:40:37.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.661 [2024-12-06T14:40:37.631Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:30.661 nvme0n1 : 2.01 18449.48 72.07 0.00 0.00 6931.90 3023.59 14656.23 00:26:30.661 [2024-12-06T14:40:37.631Z] =================================================================================================================== 00:26:30.661 [2024-12-06T14:40:37.631Z] Total : 18449.48 72.07 0.00 0.00 6931.90 3023.59 14656.23 00:26:30.661 0 00:26:30.661 14:40:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:30.661 14:40:37 -- host/digest.sh@92 -- # get_accel_stats 00:26:30.661 14:40:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:30.661 14:40:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:30.661 14:40:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:30.661 | select(.opcode=="crc32c") 00:26:30.661 | "\(.module_name) \(.executed)"' 00:26:30.919 14:40:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:30.919 14:40:37 -- host/digest.sh@93 -- # exp_module=software 00:26:30.919 14:40:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:30.919 14:40:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:30.919 14:40:37 -- host/digest.sh@97 -- # killprocess 87166 00:26:30.919 14:40:37 -- common/autotest_common.sh@936 -- # '[' -z 87166 ']' 00:26:30.919 14:40:37 -- common/autotest_common.sh@940 -- # kill -0 87166 00:26:30.919 14:40:37 -- common/autotest_common.sh@941 -- # uname 00:26:30.919 14:40:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:30.919 14:40:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87166 00:26:30.919 14:40:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:30.919 14:40:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:30.919 killing process with pid 87166 00:26:30.919 14:40:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87166' 00:26:30.919 Received shutdown signal, test time was about 2.000000 seconds 00:26:30.919 00:26:30.919 Latency(us) 00:26:30.919 [2024-12-06T14:40:37.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.920 [2024-12-06T14:40:37.890Z] =================================================================================================================== 00:26:30.920 [2024-12-06T14:40:37.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.920 14:40:37 -- common/autotest_common.sh@955 -- # kill 87166 00:26:30.920 14:40:37 -- common/autotest_common.sh@960 -- # wait 87166 00:26:31.178 14:40:38 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:26:31.178 14:40:38 -- host/digest.sh@77 -- # local rw bs qd 00:26:31.178 14:40:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:31.178 14:40:38 -- host/digest.sh@80 -- # rw=randread 00:26:31.178 14:40:38 -- host/digest.sh@80 -- # bs=131072 00:26:31.178 14:40:38 -- host/digest.sh@80 -- # qd=16 00:26:31.178 14:40:38 -- host/digest.sh@82 -- # bperfpid=87257 00:26:31.178 14:40:38 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:31.178 14:40:38 -- host/digest.sh@83 -- # waitforlisten 87257 /var/tmp/bperf.sock 00:26:31.178 14:40:38 -- 
common/autotest_common.sh@829 -- # '[' -z 87257 ']' 00:26:31.178 14:40:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.178 14:40:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.178 14:40:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.178 14:40:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.178 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:26:31.178 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:31.178 Zero copy mechanism will not be used. 00:26:31.178 [2024-12-06 14:40:38.092513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:31.178 [2024-12-06 14:40:38.092616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87257 ] 00:26:31.437 [2024-12-06 14:40:38.226439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.437 [2024-12-06 14:40:38.335529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.374 14:40:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.374 14:40:39 -- common/autotest_common.sh@862 -- # return 0 00:26:32.374 14:40:39 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:32.374 14:40:39 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:32.374 14:40:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:32.634 14:40:39 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.634 14:40:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.894 nvme0n1 00:26:32.894 14:40:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:32.894 14:40:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.153 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:33.153 Zero copy mechanism will not be used. 00:26:33.153 Running I/O for 2 seconds... 
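As a quick sanity check on the Latency(us) tables bdevperf prints, the MiB/s column is just IOPS times I/O size, and at a fixed queue depth the average latency follows from Little's law. Using the first randread table above (18449.48 IOPS, 4096-byte I/O, queue depth 128):

    # MiB/s = IOPS * IO size / 2^20
    echo '18449.48 * 4096 / 1048576' | bc -l      # ~72.07, matching the table
    # average latency (us) ~= queue depth / IOPS * 1e6
    echo '128 / 18449.48 * 1000000' | bc -l       # ~6938 us, vs 6931.90 reported (runtime was 2.01 s)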
00:26:35.055 00:26:35.055 Latency(us) 00:26:35.055 [2024-12-06T14:40:42.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.055 [2024-12-06T14:40:42.025Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:35.055 nvme0n1 : 2.00 9019.50 1127.44 0.00 0.00 1770.89 580.89 5481.19 00:26:35.055 [2024-12-06T14:40:42.025Z] =================================================================================================================== 00:26:35.055 [2024-12-06T14:40:42.025Z] Total : 9019.50 1127.44 0.00 0.00 1770.89 580.89 5481.19 00:26:35.055 0 00:26:35.055 14:40:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:35.055 14:40:41 -- host/digest.sh@92 -- # get_accel_stats 00:26:35.055 14:40:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:35.055 14:40:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:35.055 | select(.opcode=="crc32c") 00:26:35.055 | "\(.module_name) \(.executed)"' 00:26:35.055 14:40:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:35.314 14:40:42 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:35.314 14:40:42 -- host/digest.sh@93 -- # exp_module=software 00:26:35.314 14:40:42 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:35.314 14:40:42 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:35.314 14:40:42 -- host/digest.sh@97 -- # killprocess 87257 00:26:35.314 14:40:42 -- common/autotest_common.sh@936 -- # '[' -z 87257 ']' 00:26:35.314 14:40:42 -- common/autotest_common.sh@940 -- # kill -0 87257 00:26:35.314 14:40:42 -- common/autotest_common.sh@941 -- # uname 00:26:35.314 14:40:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:35.314 14:40:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87257 00:26:35.314 14:40:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:35.314 14:40:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:35.314 14:40:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87257' 00:26:35.314 killing process with pid 87257 00:26:35.314 14:40:42 -- common/autotest_common.sh@955 -- # kill 87257 00:26:35.314 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.314 00:26:35.314 Latency(us) 00:26:35.314 [2024-12-06T14:40:42.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.314 [2024-12-06T14:40:42.284Z] =================================================================================================================== 00:26:35.314 [2024-12-06T14:40:42.284Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.314 14:40:42 -- common/autotest_common.sh@960 -- # wait 87257 00:26:35.581 14:40:42 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:26:35.581 14:40:42 -- host/digest.sh@77 -- # local rw bs qd 00:26:35.581 14:40:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:35.581 14:40:42 -- host/digest.sh@80 -- # rw=randwrite 00:26:35.581 14:40:42 -- host/digest.sh@80 -- # bs=4096 00:26:35.581 14:40:42 -- host/digest.sh@80 -- # qd=128 00:26:35.581 14:40:42 -- host/digest.sh@82 -- # bperfpid=87348 00:26:35.581 14:40:42 -- host/digest.sh@83 -- # waitforlisten 87348 /var/tmp/bperf.sock 00:26:35.581 14:40:42 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:35.581 14:40:42 -- 
common/autotest_common.sh@829 -- # '[' -z 87348 ']' 00:26:35.581 14:40:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.581 14:40:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.581 14:40:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.581 14:40:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.581 14:40:42 -- common/autotest_common.sh@10 -- # set +x 00:26:35.581 [2024-12-06 14:40:42.521062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:35.581 [2024-12-06 14:40:42.521154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87348 ] 00:26:35.864 [2024-12-06 14:40:42.657744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.864 [2024-12-06 14:40:42.758548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.812 14:40:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.812 14:40:43 -- common/autotest_common.sh@862 -- # return 0 00:26:36.812 14:40:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:36.812 14:40:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:36.812 14:40:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:37.070 14:40:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.070 14:40:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.327 nvme0n1 00:26:37.328 14:40:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:37.328 14:40:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:37.585 Running I/O for 2 seconds... 
00:26:39.484 00:26:39.484 Latency(us) 00:26:39.484 [2024-12-06T14:40:46.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.484 [2024-12-06T14:40:46.454Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:39.484 nvme0n1 : 2.00 23879.82 93.28 0.00 0.00 5354.90 2025.66 9413.35 00:26:39.484 [2024-12-06T14:40:46.454Z] =================================================================================================================== 00:26:39.484 [2024-12-06T14:40:46.454Z] Total : 23879.82 93.28 0.00 0.00 5354.90 2025.66 9413.35 00:26:39.484 0 00:26:39.484 14:40:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:39.484 14:40:46 -- host/digest.sh@92 -- # get_accel_stats 00:26:39.484 14:40:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:39.484 14:40:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:39.484 14:40:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:39.484 | select(.opcode=="crc32c") 00:26:39.484 | "\(.module_name) \(.executed)"' 00:26:39.743 14:40:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:39.743 14:40:46 -- host/digest.sh@93 -- # exp_module=software 00:26:39.743 14:40:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:39.743 14:40:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:39.743 14:40:46 -- host/digest.sh@97 -- # killprocess 87348 00:26:39.743 14:40:46 -- common/autotest_common.sh@936 -- # '[' -z 87348 ']' 00:26:39.743 14:40:46 -- common/autotest_common.sh@940 -- # kill -0 87348 00:26:39.743 14:40:46 -- common/autotest_common.sh@941 -- # uname 00:26:39.743 14:40:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:39.743 14:40:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87348 00:26:39.743 killing process with pid 87348 00:26:39.743 Received shutdown signal, test time was about 2.000000 seconds 00:26:39.743 00:26:39.743 Latency(us) 00:26:39.743 [2024-12-06T14:40:46.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.743 [2024-12-06T14:40:46.713Z] =================================================================================================================== 00:26:39.743 [2024-12-06T14:40:46.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.743 14:40:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:39.743 14:40:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:39.743 14:40:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87348' 00:26:39.743 14:40:46 -- common/autotest_common.sh@955 -- # kill 87348 00:26:39.743 14:40:46 -- common/autotest_common.sh@960 -- # wait 87348 00:26:40.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
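The pass/fail decision after each job above comes from the accel framework statistics of the bdevperf process, not from the I/O numbers: the harness reads accel_get_stats back over the bperf socket, filters for the crc32c opcode with the jq expression shown in the traces, and requires that digests were actually executed by the expected module (software, since no hardware accel module is configured in this run). A condensed sketch of that check:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    read -r acc_module acc_executed < <($RPC -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))              # some crc32c work was really performed for --ddgst
    [[ "$acc_module" == software ]]     # and it ran in the expected (software) module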
00:26:40.002 14:40:46 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:26:40.002 14:40:46 -- host/digest.sh@77 -- # local rw bs qd 00:26:40.002 14:40:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:40.002 14:40:46 -- host/digest.sh@80 -- # rw=randwrite 00:26:40.002 14:40:46 -- host/digest.sh@80 -- # bs=131072 00:26:40.002 14:40:46 -- host/digest.sh@80 -- # qd=16 00:26:40.002 14:40:46 -- host/digest.sh@82 -- # bperfpid=87439 00:26:40.002 14:40:46 -- host/digest.sh@83 -- # waitforlisten 87439 /var/tmp/bperf.sock 00:26:40.002 14:40:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:40.002 14:40:46 -- common/autotest_common.sh@829 -- # '[' -z 87439 ']' 00:26:40.002 14:40:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:40.002 14:40:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.002 14:40:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:40.002 14:40:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.002 14:40:46 -- common/autotest_common.sh@10 -- # set +x 00:26:40.002 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.002 Zero copy mechanism will not be used. 00:26:40.002 [2024-12-06 14:40:46.909288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:40.002 [2024-12-06 14:40:46.909385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87439 ] 00:26:40.259 [2024-12-06 14:40:47.042093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.259 [2024-12-06 14:40:47.149895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.192 14:40:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.192 14:40:47 -- common/autotest_common.sh@862 -- # return 0 00:26:41.192 14:40:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:41.192 14:40:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:41.192 14:40:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:41.450 14:40:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.450 14:40:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.709 nvme0n1 00:26:41.709 14:40:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:41.709 14:40:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:41.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:41.709 Zero copy mechanism will not be used. 00:26:41.709 Running I/O for 2 seconds... 
00:26:44.241 00:26:44.241 Latency(us) 00:26:44.241 [2024-12-06T14:40:51.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.241 [2024-12-06T14:40:51.211Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:44.241 nvme0n1 : 2.00 7808.74 976.09 0.00 0.00 2044.09 1697.98 8519.68 00:26:44.241 [2024-12-06T14:40:51.211Z] =================================================================================================================== 00:26:44.241 [2024-12-06T14:40:51.211Z] Total : 7808.74 976.09 0.00 0.00 2044.09 1697.98 8519.68 00:26:44.241 0 00:26:44.241 14:40:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:44.241 14:40:50 -- host/digest.sh@92 -- # get_accel_stats 00:26:44.241 14:40:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:44.241 14:40:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:44.241 | select(.opcode=="crc32c") 00:26:44.241 | "\(.module_name) \(.executed)"' 00:26:44.241 14:40:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:44.241 14:40:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:44.241 14:40:50 -- host/digest.sh@93 -- # exp_module=software 00:26:44.241 14:40:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:44.241 14:40:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:44.241 14:40:50 -- host/digest.sh@97 -- # killprocess 87439 00:26:44.241 14:40:50 -- common/autotest_common.sh@936 -- # '[' -z 87439 ']' 00:26:44.241 14:40:50 -- common/autotest_common.sh@940 -- # kill -0 87439 00:26:44.241 14:40:50 -- common/autotest_common.sh@941 -- # uname 00:26:44.242 14:40:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:44.242 14:40:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87439 00:26:44.242 killing process with pid 87439 00:26:44.242 Received shutdown signal, test time was about 2.000000 seconds 00:26:44.242 00:26:44.242 Latency(us) 00:26:44.242 [2024-12-06T14:40:51.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.242 [2024-12-06T14:40:51.212Z] =================================================================================================================== 00:26:44.242 [2024-12-06T14:40:51.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.242 14:40:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:44.242 14:40:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:44.242 14:40:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87439' 00:26:44.242 14:40:50 -- common/autotest_common.sh@955 -- # kill 87439 00:26:44.242 14:40:50 -- common/autotest_common.sh@960 -- # wait 87439 00:26:44.501 14:40:51 -- host/digest.sh@126 -- # killprocess 87115 00:26:44.501 14:40:51 -- common/autotest_common.sh@936 -- # '[' -z 87115 ']' 00:26:44.501 14:40:51 -- common/autotest_common.sh@940 -- # kill -0 87115 00:26:44.501 14:40:51 -- common/autotest_common.sh@941 -- # uname 00:26:44.501 14:40:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:44.501 14:40:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87115 00:26:44.501 killing process with pid 87115 00:26:44.501 14:40:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:44.501 14:40:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:44.501 14:40:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87115' 00:26:44.501 
14:40:51 -- common/autotest_common.sh@955 -- # kill 87115 00:26:44.501 14:40:51 -- common/autotest_common.sh@960 -- # wait 87115 00:26:44.760 ************************************ 00:26:44.760 END TEST nvmf_digest_clean 00:26:44.760 ************************************ 00:26:44.760 00:26:44.760 real 0m19.224s 00:26:44.760 user 0m36.871s 00:26:44.760 sys 0m4.731s 00:26:44.760 14:40:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:44.760 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:44.760 14:40:51 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:26:44.760 14:40:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:44.760 14:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:44.760 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:44.760 ************************************ 00:26:44.760 START TEST nvmf_digest_error 00:26:44.760 ************************************ 00:26:44.760 14:40:51 -- common/autotest_common.sh@1114 -- # run_digest_error 00:26:44.760 14:40:51 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:26:44.760 14:40:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:44.760 14:40:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:44.760 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:44.760 14:40:51 -- nvmf/common.sh@469 -- # nvmfpid=87552 00:26:44.760 14:40:51 -- nvmf/common.sh@470 -- # waitforlisten 87552 00:26:44.760 14:40:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:44.760 14:40:51 -- common/autotest_common.sh@829 -- # '[' -z 87552 ']' 00:26:44.760 14:40:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.760 14:40:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:44.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.760 14:40:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.760 14:40:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:44.760 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:44.760 [2024-12-06 14:40:51.627750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:44.760 [2024-12-06 14:40:51.627860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.019 [2024-12-06 14:40:51.766649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.019 [2024-12-06 14:40:51.863500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:45.019 [2024-12-06 14:40:51.863641] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.019 [2024-12-06 14:40:51.863654] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.019 [2024-12-06 14:40:51.863664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:45.019 [2024-12-06 14:40:51.863700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.956 14:40:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.956 14:40:52 -- common/autotest_common.sh@862 -- # return 0 00:26:45.956 14:40:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:45.956 14:40:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:45.956 14:40:52 -- common/autotest_common.sh@10 -- # set +x 00:26:45.956 14:40:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.956 14:40:52 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:45.956 14:40:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.956 14:40:52 -- common/autotest_common.sh@10 -- # set +x 00:26:45.956 [2024-12-06 14:40:52.696225] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:45.956 14:40:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.956 14:40:52 -- host/digest.sh@104 -- # common_target_config 00:26:45.956 14:40:52 -- host/digest.sh@43 -- # rpc_cmd 00:26:45.956 14:40:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.956 14:40:52 -- common/autotest_common.sh@10 -- # set +x 00:26:45.956 null0 00:26:45.956 [2024-12-06 14:40:52.814530] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.956 [2024-12-06 14:40:52.838634] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.956 14:40:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.956 14:40:52 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:26:45.956 14:40:52 -- host/digest.sh@54 -- # local rw bs qd 00:26:45.956 14:40:52 -- host/digest.sh@56 -- # rw=randread 00:26:45.956 14:40:52 -- host/digest.sh@56 -- # bs=4096 00:26:45.956 14:40:52 -- host/digest.sh@56 -- # qd=128 00:26:45.956 14:40:52 -- host/digest.sh@58 -- # bperfpid=87596 00:26:45.956 14:40:52 -- host/digest.sh@60 -- # waitforlisten 87596 /var/tmp/bperf.sock 00:26:45.956 14:40:52 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:45.956 14:40:52 -- common/autotest_common.sh@829 -- # '[' -z 87596 ']' 00:26:45.957 14:40:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.957 14:40:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.957 14:40:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.957 14:40:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.957 14:40:52 -- common/autotest_common.sh@10 -- # set +x 00:26:45.957 [2024-12-06 14:40:52.903836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:45.957 [2024-12-06 14:40:52.903941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87596 ] 00:26:46.215 [2024-12-06 14:40:53.043677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.215 [2024-12-06 14:40:53.151976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.163 14:40:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.163 14:40:53 -- common/autotest_common.sh@862 -- # return 0 00:26:47.163 14:40:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:47.163 14:40:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:47.423 14:40:54 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:47.423 14:40:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.423 14:40:54 -- common/autotest_common.sh@10 -- # set +x 00:26:47.423 14:40:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.423 14:40:54 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.423 14:40:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.681 nvme0n1 00:26:47.681 14:40:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:47.681 14:40:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.681 14:40:54 -- common/autotest_common.sh@10 -- # set +x 00:26:47.681 14:40:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.681 14:40:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:47.681 14:40:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.681 Running I/O for 2 seconds... 
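Where nvmf_digest_clean only verified that crc32c digests were computed, this nvmf_digest_error phase deliberately breaks them: the target's crc32c opcode was assigned to the error accel module (accel_assign_opc -o crc32c -m error, traced above), and just before perform_tests the harness asks that module to corrupt the next 256 digest results. Each corrupted data digest then fails verification on the initiator side, which is exactly what the stream of "data digest error on tqpair" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records below shows, while --bdev-retry-count -1 keeps the bdev layer retrying so the 2-second job itself survives. A condensed sketch of the sequence (rpc_cmd is the harness wrapper that talks to the nvmf_tgt inside the namespace; paths as in the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: route crc32c through the error-injection accel module
    rpc_cmd accel_assign_opc -o crc32c -m error
    # bdevperf side: count NVMe errors, never give up on retries, attach with data digest on
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt the next 256 crc32c results, then run the timed workload
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests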
00:26:47.681 [2024-12-06 14:40:54.571789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.681 [2024-12-06 14:40:54.571869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.681 [2024-12-06 14:40:54.571883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.681 [2024-12-06 14:40:54.583796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.681 [2024-12-06 14:40:54.583827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.681 [2024-12-06 14:40:54.583839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.681 [2024-12-06 14:40:54.597063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.681 [2024-12-06 14:40:54.597094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.681 [2024-12-06 14:40:54.597108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.681 [2024-12-06 14:40:54.608921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.681 [2024-12-06 14:40:54.608952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.681 [2024-12-06 14:40:54.608964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.681 [2024-12-06 14:40:54.620592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.681 [2024-12-06 14:40:54.620622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.681 [2024-12-06 14:40:54.620634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.681 [2024-12-06 14:40:54.630483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.681 [2024-12-06 14:40:54.630512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.681 [2024-12-06 14:40:54.630523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.681 [2024-12-06 14:40:54.641619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.681 [2024-12-06 14:40:54.641648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.681 [2024-12-06 14:40:54.641668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.651243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.651273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.651284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.662971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.663002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.663014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.672907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.672936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.672947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.684992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.685022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.685036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.694232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.694263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.694273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.705416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.705445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.705455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.715124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.715154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.715165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.726052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.726082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.726110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.738813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.738845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.738856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.751381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.751423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.751436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.762252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.762281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.762294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.773808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.773838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.773851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.783093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.783124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.783136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.794599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.794628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.941 [2024-12-06 14:40:54.794642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.806329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.806359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.806372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.816543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.816572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.816582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.826908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.826937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.826950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.837298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.837328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.837339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.854181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.854277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.854298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.873266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.873338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.873359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.941 [2024-12-06 14:40:54.890267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:47.941 [2024-12-06 14:40:54.890335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:20969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.941 [2024-12-06 14:40:54.890354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.207 [2024-12-06 14:40:54.909455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.207 [2024-12-06 14:40:54.909522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.207 [2024-12-06 14:40:54.909544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.207 [2024-12-06 14:40:54.928201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.207 [2024-12-06 14:40:54.928270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.207 [2024-12-06 14:40:54.928294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.207 [2024-12-06 14:40:54.946878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.207 [2024-12-06 14:40:54.946947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.207 [2024-12-06 14:40:54.946969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.207 [2024-12-06 14:40:54.965804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.207 [2024-12-06 14:40:54.965881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.207 [2024-12-06 14:40:54.965902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.207 [2024-12-06 14:40:54.980555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.207 [2024-12-06 14:40:54.980611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:54.980625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:54.995437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:54.995487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:54.995501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.011120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:55.011173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:55.011185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.026726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:55.026778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:55.026790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.039125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:55.039177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:55.039189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.049660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:55.049735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:55.049748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.060553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:55.060603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:55.060615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.071309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:55.071361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:55.071373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.082372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.208 [2024-12-06 14:40:55.082433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.208 [2024-12-06 14:40:55.082447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.208 [2024-12-06 14:40:55.093476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 
00:26:48.208 [2024-12-06 14:40:55.093527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.209 [2024-12-06 14:40:55.093539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.209 [2024-12-06 14:40:55.103567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.209 [2024-12-06 14:40:55.103619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.209 [2024-12-06 14:40:55.103630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.209 [2024-12-06 14:40:55.113773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.209 [2024-12-06 14:40:55.113826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.209 [2024-12-06 14:40:55.113839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.209 [2024-12-06 14:40:55.127394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.209 [2024-12-06 14:40:55.127455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.210 [2024-12-06 14:40:55.127468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.210 [2024-12-06 14:40:55.140485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.210 [2024-12-06 14:40:55.140537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.210 [2024-12-06 14:40:55.140550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.210 [2024-12-06 14:40:55.153917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.210 [2024-12-06 14:40:55.153997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.210 [2024-12-06 14:40:55.154010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.210 [2024-12-06 14:40:55.167457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.210 [2024-12-06 14:40:55.167508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.210 [2024-12-06 14:40:55.167520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.472 [2024-12-06 14:40:55.180694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.472 [2024-12-06 14:40:55.180748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.180761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.191541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.191594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.191607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.202672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.202722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.202735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.212975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.213027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.213039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.223345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.223398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.223410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.236967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.237018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.237030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.251164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.251216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.251228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.263313] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.263365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.263377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.277609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.277660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.277697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.291497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.291547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.291560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.305173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.305225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.305236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.319679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.319731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.319743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.333156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.333208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.333220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.342435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.342499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.342511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.356440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.356490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.356503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.370151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.370204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.370216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.383535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.383587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.383599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.394514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.394565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.394577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.405137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.405189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.405201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.415198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.415251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.415278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.473 [2024-12-06 14:40:55.428598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.473 [2024-12-06 14:40:55.428652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.473 [2024-12-06 14:40:55.428665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.441444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.441509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.441523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.453585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.453636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.453648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.465345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.465397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.465409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.476339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.476390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.476402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.486970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.487021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.487033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.501794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.501846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.501859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.515618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.515669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.515681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.525155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.525206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.525219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.538697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.538749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.538761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.552915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.552967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.552978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.566968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.567019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.567031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.581126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.581179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.581191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.594082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.594134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.594147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.604424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.604487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.732 [2024-12-06 14:40:55.604500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.615677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.615728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.615741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.627411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.627477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.627491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.642979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.643019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.643042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.656361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.656413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.656455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.668071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.668124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-12-06 14:40:55.668136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.732 [2024-12-06 14:40:55.682628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.732 [2024-12-06 14:40:55.682682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-12-06 14:40:55.682695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.733 [2024-12-06 14:40:55.696841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.733 [2024-12-06 14:40:55.696893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-12-06 14:40:55.696906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.712264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.712319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.712332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.727418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.727483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.727496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.742563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.742616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.742630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.756829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.756883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.756895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.770929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.770983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.770997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.781481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.781534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.781548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.792463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.792515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.792528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.806888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.806920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.806934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.822773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.822827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.822840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.837326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.837378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.837390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.851406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.851468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.851480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.991 [2024-12-06 14:40:55.865792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.991 [2024-12-06 14:40:55.865831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.991 [2024-12-06 14:40:55.865843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.992 [2024-12-06 14:40:55.876310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.992 [2024-12-06 14:40:55.876361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.992 [2024-12-06 14:40:55.876373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.992 [2024-12-06 14:40:55.889623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 
00:26:48.992 [2024-12-06 14:40:55.889699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.992 [2024-12-06 14:40:55.889713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.992 [2024-12-06 14:40:55.901830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.992 [2024-12-06 14:40:55.901882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.992 [2024-12-06 14:40:55.901895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.992 [2024-12-06 14:40:55.913326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.992 [2024-12-06 14:40:55.913378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.992 [2024-12-06 14:40:55.913390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.992 [2024-12-06 14:40:55.923504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.992 [2024-12-06 14:40:55.923556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.992 [2024-12-06 14:40:55.923569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.992 [2024-12-06 14:40:55.937803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.992 [2024-12-06 14:40:55.937856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.992 [2024-12-06 14:40:55.937870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.992 [2024-12-06 14:40:55.951266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:48.992 [2024-12-06 14:40:55.951318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.992 [2024-12-06 14:40:55.951330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:55.962084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:55.962135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:55.962147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:55.975301] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:55.975353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:55.975365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:55.986804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:55.986857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:55.986870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:55.999286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:55.999323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:55.999335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.012722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.012806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.012835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.024465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.024527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.024541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.036096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.036148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.036161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.046909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.046961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.046974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.057376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.057436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.057450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.068741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.068792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.068805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.078858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.078910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.078922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.089140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.089193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.089206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.102061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.102112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.102124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.111984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.112035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.112047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.126104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.126155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.126167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.140568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.140619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.140631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.250 [2024-12-06 14:40:56.153702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.250 [2024-12-06 14:40:56.153753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.250 [2024-12-06 14:40:56.153766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.251 [2024-12-06 14:40:56.167568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.251 [2024-12-06 14:40:56.167619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.251 [2024-12-06 14:40:56.167632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.251 [2024-12-06 14:40:56.181611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.251 [2024-12-06 14:40:56.181670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.251 [2024-12-06 14:40:56.181702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.251 [2024-12-06 14:40:56.195167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.251 [2024-12-06 14:40:56.195219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.251 [2024-12-06 14:40:56.195231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.251 [2024-12-06 14:40:56.207712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.251 [2024-12-06 14:40:56.207765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.251 [2024-12-06 14:40:56.207778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.251 [2024-12-06 14:40:56.218138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.251 [2024-12-06 14:40:56.218189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.251 [2024-12-06 14:40:56.218200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.229358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.229411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.229434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.242104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.242156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.242168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.254820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.254872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.254884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.267105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.267157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.267169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.281619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.281696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.281710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.293289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.293340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.293352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.303782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.303818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:49.509 [2024-12-06 14:40:56.303846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.317180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.317233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.317246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.330701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.330753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.330766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.343419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.343470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.343482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.358373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.358435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.509 [2024-12-06 14:40:56.358449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.509 [2024-12-06 14:40:56.370964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.509 [2024-12-06 14:40:56.371016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.371028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.510 [2024-12-06 14:40:56.382090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.510 [2024-12-06 14:40:56.382143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.382155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.510 [2024-12-06 14:40:56.395648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.510 [2024-12-06 14:40:56.395700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:21604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.395712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.510 [2024-12-06 14:40:56.409140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.510 [2024-12-06 14:40:56.409193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.409206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.510 [2024-12-06 14:40:56.422566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.510 [2024-12-06 14:40:56.422617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.422629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.510 [2024-12-06 14:40:56.436505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.510 [2024-12-06 14:40:56.436556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.436569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.510 [2024-12-06 14:40:56.450047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.510 [2024-12-06 14:40:56.450099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.450113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.510 [2024-12-06 14:40:56.465531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.510 [2024-12-06 14:40:56.465583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.510 [2024-12-06 14:40:56.465596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.769 [2024-12-06 14:40:56.480315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.769 [2024-12-06 14:40:56.480368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.769 [2024-12-06 14:40:56.480380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.769 [2024-12-06 14:40:56.494207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.769 [2024-12-06 14:40:56.494258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.769 [2024-12-06 14:40:56.494270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.769 [2024-12-06 14:40:56.508955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.769 [2024-12-06 14:40:56.509008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.769 [2024-12-06 14:40:56.509020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.769 [2024-12-06 14:40:56.522792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.769 [2024-12-06 14:40:56.522861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.769 [2024-12-06 14:40:56.522890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.769 [2024-12-06 14:40:56.536747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.769 [2024-12-06 14:40:56.536800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.769 [2024-12-06 14:40:56.536828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.769 [2024-12-06 14:40:56.549671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1157f50) 00:26:49.769 [2024-12-06 14:40:56.549739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.769 [2024-12-06 14:40:56.549751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.769 00:26:49.769 Latency(us) 00:26:49.769 [2024-12-06T14:40:56.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.769 [2024-12-06T14:40:56.739Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:49.769 nvme0n1 : 2.00 19903.49 77.75 0.00 0.00 6425.45 2576.76 27525.12 00:26:49.769 [2024-12-06T14:40:56.739Z] =================================================================================================================== 00:26:49.769 [2024-12-06T14:40:56.739Z] Total : 19903.49 77.75 0.00 0.00 6425.45 2576.76 27525.12 00:26:49.769 0 00:26:49.769 14:40:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:49.769 14:40:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:49.769 14:40:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:49.769 14:40:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:49.769 | .driver_specific 00:26:49.769 | .nvme_error 00:26:49.769 | .status_code 00:26:49.769 | .command_transient_transport_error' 00:26:50.028 14:40:56 -- host/digest.sh@71 -- # (( 156 > 0 )) 
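The trace above is the tail of the first randread pass: after bdevperf prints its 2-second latency summary, host/digest.sh checks that the injected digest errors were actually observed by the initiator. get_transient_errcount queries the bperf instance over its control socket with the bdev_get_iostat RPC and pulls the transient-transport-error counter out of the per-bdev NVMe error statistics with jq; the "(( 156 > 0 ))" test is the assertion that this run recorded at least one such error (156 here). A condensed sketch of that check, with the socket path, bdev name, and jq filter taken from the trace (the exact helper bodies in host/digest.sh may differ):

    # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for a bdev.
    get_transient_errcount() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # fail the test if no transient transport errors were counted

These per-status-code counters show up in bdev_get_iostat output because the bperf instance is configured with bdev_nvme_set_options --nvme-error-stat, as the setup of the next pass below shows.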
00:26:50.028 14:40:56 -- host/digest.sh@73 -- # killprocess 87596 00:26:50.028 14:40:56 -- common/autotest_common.sh@936 -- # '[' -z 87596 ']' 00:26:50.028 14:40:56 -- common/autotest_common.sh@940 -- # kill -0 87596 00:26:50.028 14:40:56 -- common/autotest_common.sh@941 -- # uname 00:26:50.028 14:40:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:50.028 14:40:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87596 00:26:50.028 14:40:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:50.028 14:40:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:50.028 killing process with pid 87596 00:26:50.028 14:40:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87596' 00:26:50.028 14:40:56 -- common/autotest_common.sh@955 -- # kill 87596 00:26:50.028 Received shutdown signal, test time was about 2.000000 seconds 00:26:50.028 00:26:50.028 Latency(us) 00:26:50.028 [2024-12-06T14:40:56.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.028 [2024-12-06T14:40:56.998Z] =================================================================================================================== 00:26:50.028 [2024-12-06T14:40:56.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:50.028 14:40:56 -- common/autotest_common.sh@960 -- # wait 87596 00:26:50.286 14:40:57 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:26:50.286 14:40:57 -- host/digest.sh@54 -- # local rw bs qd 00:26:50.286 14:40:57 -- host/digest.sh@56 -- # rw=randread 00:26:50.286 14:40:57 -- host/digest.sh@56 -- # bs=131072 00:26:50.286 14:40:57 -- host/digest.sh@56 -- # qd=16 00:26:50.286 14:40:57 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:50.286 14:40:57 -- host/digest.sh@58 -- # bperfpid=87692 00:26:50.286 14:40:57 -- host/digest.sh@60 -- # waitforlisten 87692 /var/tmp/bperf.sock 00:26:50.286 14:40:57 -- common/autotest_common.sh@829 -- # '[' -z 87692 ']' 00:26:50.286 14:40:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.286 14:40:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:50.286 14:40:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.286 14:40:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.286 14:40:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.286 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.286 Zero copy mechanism will not be used. 00:26:50.286 [2024-12-06 14:40:57.189199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:50.286 [2024-12-06 14:40:57.189307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87692 ] 00:26:50.545 [2024-12-06 14:40:57.320664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.545 [2024-12-06 14:40:57.412270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.481 14:40:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.481 14:40:58 -- common/autotest_common.sh@862 -- # return 0 00:26:51.481 14:40:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:51.481 14:40:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:51.481 14:40:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:51.481 14:40:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.481 14:40:58 -- common/autotest_common.sh@10 -- # set +x 00:26:51.481 14:40:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.481 14:40:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.481 14:40:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.048 nvme0n1 00:26:52.048 14:40:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:52.048 14:40:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.048 14:40:58 -- common/autotest_common.sh@10 -- # set +x 00:26:52.048 14:40:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.048 14:40:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:52.048 14:40:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:52.048 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.048 Zero copy mechanism will not be used. 00:26:52.048 Running I/O for 2 seconds... 
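The trace above repeats the digest test with 128 KiB random reads at queue depth 16. A condensed sketch of that setup sequence, with all commands copied from the trace; the only assumption is that the accel error injection (rpc_cmd) goes to the target's default RPC socket rather than the bperf socket, and the rpc shell variable is illustrative:

    # Start bdevperf on its own RPC socket in wait-for-RPC mode (-z).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Keep per-status NVMe error counters and retry indefinitely so digest
    # errors are counted rather than failing the run.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the NVMe/TCP controller with data digest (--ddgst) enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject crc32c corruption into the target's accel layer (arguments as traced),
    # so read PDUs arrive with a bad data digest.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the timed workload; the digest-error records below follow from this.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests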
00:26:52.048 [2024-12-06 14:40:58.878539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.048 [2024-12-06 14:40:58.878607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.048 [2024-12-06 14:40:58.878621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.048 [2024-12-06 14:40:58.883346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.048 [2024-12-06 14:40:58.883401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.048 [2024-12-06 14:40:58.883415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.048 [2024-12-06 14:40:58.887500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.048 [2024-12-06 14:40:58.887553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.048 [2024-12-06 14:40:58.887565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.892198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.892254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.892282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.895694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.895749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.895762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.899875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.899929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.899941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.903686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.903740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.903753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.907612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.907664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.907677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.911628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.911682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.911694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.914944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.914999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.915011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.918595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.918648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.918660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.921832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.921886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.921899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.925354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.925405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.925434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.929361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.929413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.929443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.933841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.933881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.933894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.937673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.937725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.937738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.941567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.941619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.941632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.945167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.945218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.945231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.948239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.948290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.948303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.951525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.951574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.951585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.955788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.955842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.955854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.959785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.959837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.959849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.963794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.963848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.963860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.967570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.967624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.967637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.971488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.971541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.971554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.975357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.975410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.975433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.979341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.979396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.979408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.982511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.982562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 
[2024-12-06 14:40:58.982574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.986264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.986315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.049 [2024-12-06 14:40:58.986327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.049 [2024-12-06 14:40:58.989858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.049 [2024-12-06 14:40:58.989911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:58.989923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.050 [2024-12-06 14:40:58.993838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.050 [2024-12-06 14:40:58.993890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:58.993903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.050 [2024-12-06 14:40:58.997367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.050 [2024-12-06 14:40:58.997417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:58.997457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.050 [2024-12-06 14:40:59.001057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.050 [2024-12-06 14:40:59.001108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:59.001119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.050 [2024-12-06 14:40:59.004730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.050 [2024-12-06 14:40:59.004780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:59.004792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.050 [2024-12-06 14:40:59.008163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.050 [2024-12-06 14:40:59.008217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:59.008230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.050 [2024-12-06 14:40:59.011993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.050 [2024-12-06 14:40:59.012045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:59.012057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.050 [2024-12-06 14:40:59.016202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.050 [2024-12-06 14:40:59.016255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.050 [2024-12-06 14:40:59.016267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.310 [2024-12-06 14:40:59.019442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.310 [2024-12-06 14:40:59.019493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.310 [2024-12-06 14:40:59.019506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.310 [2024-12-06 14:40:59.023729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.310 [2024-12-06 14:40:59.023783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.310 [2024-12-06 14:40:59.023795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.310 [2024-12-06 14:40:59.027037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.310 [2024-12-06 14:40:59.027090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.310 [2024-12-06 14:40:59.027103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.310 [2024-12-06 14:40:59.031188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.310 [2024-12-06 14:40:59.031240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.310 [2024-12-06 14:40:59.031252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.310 [2024-12-06 14:40:59.034862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.310 [2024-12-06 14:40:59.034915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.310 [2024-12-06 14:40:59.034928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.038006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.038060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.038087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.042537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.042591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.042603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.046155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.046210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.046237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.050045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.050099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.050112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.054051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.054105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.054118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.058214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.058296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.058308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.061744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.061784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.061797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.065677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.065714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.065726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.069793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.069832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.069845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.073783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.073823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.073836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.077313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.077366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.077378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.080901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.080952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.080964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.084506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.084543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.084556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.088415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 
[2024-12-06 14:40:59.088484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.088497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.092031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.092084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.092096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.095989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.096043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.096056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.099202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.099255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.099267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.102399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.102479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.102492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.105768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.105807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.105820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.109247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.109298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.109310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.113289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.113340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.113352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.116943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.116994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.117007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.120721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.120772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.120784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.124378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.124444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.124459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.128875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.128929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.128941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.132586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.132639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.311 [2024-12-06 14:40:59.132652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.311 [2024-12-06 14:40:59.136154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.311 [2024-12-06 14:40:59.136206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.136219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.140516] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.140569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.140581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.144374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.144441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.144455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.148352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.148405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.148417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.152738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.152778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.152791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.156846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.156917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.156929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.161048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.161103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.161115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.165107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.165159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.165172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:52.312 [2024-12-06 14:40:59.169326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.169379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.169391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.173502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.173552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.173565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.177350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.177398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.177425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.181895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.181932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.181945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.185781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.185818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.185831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.189428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.189491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.189504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.193791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.193829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.193841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.198407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.198470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.198484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.202669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.202724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.202736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.206054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.206106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.206118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.209559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.209610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.209623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.213507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.213560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.213572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.217116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.217168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.217180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.221009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.221061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.221072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.225094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.225145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.225157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.228597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.228647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.228660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.232231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.232283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.232296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.235905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.235957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.235969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.239929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.239982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.312 [2024-12-06 14:40:59.239995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.312 [2024-12-06 14:40:59.245245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.312 [2024-12-06 14:40:59.245327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.313 [2024-12-06 14:40:59.245339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.313 [2024-12-06 14:40:59.251978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.313 [2024-12-06 14:40:59.252029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.313 [2024-12-06 14:40:59.252041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.313 [2024-12-06 14:40:59.257945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.313 [2024-12-06 14:40:59.257997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.313 [2024-12-06 14:40:59.258024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.313 [2024-12-06 14:40:59.262408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.313 [2024-12-06 14:40:59.262471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.313 [2024-12-06 14:40:59.262483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.313 [2024-12-06 14:40:59.267563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.313 [2024-12-06 14:40:59.267615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.313 [2024-12-06 14:40:59.267628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.313 [2024-12-06 14:40:59.272135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.313 [2024-12-06 14:40:59.272186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.313 [2024-12-06 14:40:59.272198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.313 [2024-12-06 14:40:59.277550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.313 [2024-12-06 14:40:59.277604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.313 [2024-12-06 14:40:59.277616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.573 [2024-12-06 14:40:59.282580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.573 [2024-12-06 14:40:59.282629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.573 [2024-12-06 14:40:59.282641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.573 [2024-12-06 14:40:59.287802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.573 [2024-12-06 14:40:59.287868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.573 
[2024-12-06 14:40:59.287881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.573 [2024-12-06 14:40:59.293140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.573 [2024-12-06 14:40:59.293192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.573 [2024-12-06 14:40:59.293204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.573 [2024-12-06 14:40:59.298204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.573 [2024-12-06 14:40:59.298256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.573 [2024-12-06 14:40:59.298269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.573 [2024-12-06 14:40:59.302421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.302482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.302495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.307642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.307696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.307709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.312244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.312296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.312309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.317357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.317410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.317435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.322023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.322063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.322076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.326917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.326971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.326984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.331585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.331639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.331652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.336480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.336531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.336543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.341567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.341621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.341633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.346357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.346434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.351398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.351478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.351492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.356396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.356476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.356490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.361190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.361226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.361240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.365849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.365888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.365900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.371352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.371405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.371435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.376335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.376387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.376400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.381814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.381851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.381865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.386525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.386578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.386591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.391574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.391629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.391643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.395860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.395912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.395925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.400805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.400859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.400873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.405999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.406036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.406049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.411417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.411470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.411482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.415996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.416050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.416062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.420236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.420282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.420295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.424580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.424635] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.424649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.574 [2024-12-06 14:40:59.428935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.574 [2024-12-06 14:40:59.428975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.574 [2024-12-06 14:40:59.428988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.433058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.433097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.433110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.438479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.438530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.438544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.443302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.443355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.443368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.448712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.448780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.448792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.454604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.454654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.454667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.460413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 
00:26:52.575 [2024-12-06 14:40:59.460475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.460489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.465877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.465915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.465928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.471261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.471313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.471326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.476386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.476448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.476461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.480706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.480758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.480771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.485339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.485393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.485406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.489417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.489496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.489509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.494128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.494182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.494195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.497646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.497712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.497725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.502198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.502250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.502262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.505882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.505919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.505931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.510256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.510309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.510323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.514763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.514816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.514829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.518642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.518696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.518708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.522268] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.522321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.522334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.526229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.526284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.526297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.529729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.529766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.529779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.533657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.533703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.533716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.575 [2024-12-06 14:40:59.538385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.575 [2024-12-06 14:40:59.538452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.575 [2024-12-06 14:40:59.538466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.542579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.542632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.542644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.546164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.546249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.546272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:52.836 [2024-12-06 14:40:59.550181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.550266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.550279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.554759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.554811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.554824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.559359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.559411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.559452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.563837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.563905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.563918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.567652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.567705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.567733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.571542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.571594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.571606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.575423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.575475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.575487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.580114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.580167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.580180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.584384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.584448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.584461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.587902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.587955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.587968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.591222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.591274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.591287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.595961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.596013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.596025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.600400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.600463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.600476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.604339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.604390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.604403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.608028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.608081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.608093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.611918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.611968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.611980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.616226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.616279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.616292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.620197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.620249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.620261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.624381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.624446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.624459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.629073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.629127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.629140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.632833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.632883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.632913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.637400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.637479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.637492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.836 [2024-12-06 14:40:59.642114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.836 [2024-12-06 14:40:59.642165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.836 [2024-12-06 14:40:59.642195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.646121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.646171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.646183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.650358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.650409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.650434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.654578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.654628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.654640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.658578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.658627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.658639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.662371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.662432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 
[2024-12-06 14:40:59.662464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.666586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.666621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.666634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.670639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.670689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.670701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.675019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.675055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.675068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.679069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.679106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.679118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.684508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.684558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.684570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.688788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.688825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.688838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.691975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.692012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.692025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.695837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.695888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.695918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.700082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.700119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.700132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.703682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.703718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.703731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.710961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.711077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.711108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.717797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.717841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.717867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.723219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.723264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.723281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.728547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.728593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.728609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.733732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.733775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.733791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.739419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.739460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.739477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.744828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.744880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.744896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.749467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.749516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.749533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.755135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.755194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.755210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.760575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.760617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.760633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.765738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.765781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.765798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.770880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.837 [2024-12-06 14:40:59.770931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.837 [2024-12-06 14:40:59.770947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.837 [2024-12-06 14:40:59.776175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.838 [2024-12-06 14:40:59.776217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.838 [2024-12-06 14:40:59.776233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.838 [2024-12-06 14:40:59.781426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.838 [2024-12-06 14:40:59.781471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.838 [2024-12-06 14:40:59.781487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.838 [2024-12-06 14:40:59.786619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.838 [2024-12-06 14:40:59.786660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.838 [2024-12-06 14:40:59.786676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.838 [2024-12-06 14:40:59.791806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.838 [2024-12-06 14:40:59.791847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.838 [2024-12-06 14:40:59.791863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.838 [2024-12-06 14:40:59.797197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.838 [2024-12-06 14:40:59.797238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.838 [2024-12-06 14:40:59.797254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.838 [2024-12-06 14:40:59.802636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:52.838 
[2024-12-06 14:40:59.802683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.838 [2024-12-06 14:40:59.802699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.100 [2024-12-06 14:40:59.808236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.100 [2024-12-06 14:40:59.808279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.100 [2024-12-06 14:40:59.808306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.100 [2024-12-06 14:40:59.813488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.100 [2024-12-06 14:40:59.813529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.813546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.818964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.819014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.819030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.824279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.824322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.824338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.829841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.829883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.829899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.835008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.835050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.835066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.840213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.840256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.840272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.845639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.845709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.845725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.850767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.850809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.850824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.856302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.856335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.856347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.859803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.859834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.859845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.863401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.863449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.863460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.867053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.867085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.867097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.870067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.870099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.870126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.873700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.873733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.873745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.877039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.877070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.877082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.880733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.880764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.880775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.884131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.884159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.884170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.887435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.887465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.887476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.890744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.890773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.890784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.893522] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.893553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.893564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.896839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.896870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.896881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.900138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.900170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.900181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.903840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.903872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.903883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.907276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.907308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.907319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.910802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.910832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.910842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.914010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.914063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.914075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:53.101 [2024-12-06 14:40:59.917168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.917202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.101 [2024-12-06 14:40:59.917214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.101 [2024-12-06 14:40:59.920372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.101 [2024-12-06 14:40:59.920613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.920629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.924314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.924492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.924511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.927584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.927620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.927632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.930949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.930986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.930998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.933735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.933771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.933784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.936898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.936932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.936945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.940146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.940311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.940327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.943584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.943614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.943625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.946944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.946980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.946993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.949994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.950031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.950043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.953614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.953649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.953692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.956920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.956955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.956967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.959766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.959802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.959813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.963230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.963266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.963278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.966901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.966936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.966948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.970038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.970073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.970085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.973244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.973278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.973290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.976971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.977007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.977019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.980766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.980818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.980830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.983847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.983883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.983895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.987727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.987779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.987791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.991942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.991994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.992005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.996088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.996123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.996134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:40:59.999656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:40:59.999690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:40:59.999702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:41:00.003103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:41:00.003272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:41:00.003291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:41:00.006429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:41:00.006477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 [2024-12-06 14:41:00.006491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:41:00.009523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.102 [2024-12-06 14:41:00.009559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.102 
[2024-12-06 14:41:00.009571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.102 [2024-12-06 14:41:00.013054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.013090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.013102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.016135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.016170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.016182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.019816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.019853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.019866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.023586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.023644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.023658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.027940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.027992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.028005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.032063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.032098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.032110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.035652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.035688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.035701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.039888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.039942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.039954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.043122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.043325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.043342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.047048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.047221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.047255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.051614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.051654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.051667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.054995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.055034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.055048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.058960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.059150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.059167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.103 [2024-12-06 14:41:00.063269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.103 [2024-12-06 14:41:00.063327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.103 [2024-12-06 14:41:00.063340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.067458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.067496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.067510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.071589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.071626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.071639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.075674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.075713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.075726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.079165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.079204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.079217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.082348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.082388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.082400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.085738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.085779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.085793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.088970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.089010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.089023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.092885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.093099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.093132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.096837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.096899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.096913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.100630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.100670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.100683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.104662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.104702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.104715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.108802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.108856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.108869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.112933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.113094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.113112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.116624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.116663] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.116676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.120358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.120546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.120563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.124038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.391 [2024-12-06 14:41:00.124213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.391 [2024-12-06 14:41:00.124229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.391 [2024-12-06 14:41:00.128501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.128692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.128709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.131972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.132010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.132022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.135472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.135650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.135790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.138904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.139088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.139276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.142830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 
00:26:53.392 [2024-12-06 14:41:00.143005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.143121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.146859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.147031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.147145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.150385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.150568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.150719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.154473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.154518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.154529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.157842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.158006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.158038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.161863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.161905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.161919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.165295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.165331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.165343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.168613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.168648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.168660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.172354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.172390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.172403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.176069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.176105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.176116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.179094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.179130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.179142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.182046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.182222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.182238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.184875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.184910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.184922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.188447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.188480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.188492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.192071] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.192107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.192119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.195166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.195203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.195216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.199083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.199246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.199262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.202531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.202567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.202579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.205498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.205534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.205546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.209216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.209253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.209265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.212497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.212539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.212551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:53.392 [2024-12-06 14:41:00.215912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.392 [2024-12-06 14:41:00.215948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.392 [2024-12-06 14:41:00.215960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.392 [2024-12-06 14:41:00.219592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.219628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.219640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.223381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.223568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.223585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.227122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.227292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.227308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.230224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.230261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.230274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.233712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.233753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.233766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.237111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.237145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.237156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.239928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.239963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.239975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.243598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.243635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.243648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.247077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.247235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.247251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.250791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.250827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.250838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.254033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.254212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.254229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.257420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.257458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.257470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.261056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.261092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.261104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.264305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.264342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.264354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.267603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.267639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.267651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.271268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.271304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.271316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.275132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.275298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.275313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.278771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.278806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.278819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.282633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.282670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.282682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.286356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.286393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.286416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.289725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.289765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.289777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.292937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.292973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.292985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.296490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.296524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.296536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.299533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.299569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.299580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.302966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.303128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.303152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.306533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.393 [2024-12-06 14:41:00.306569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.393 [2024-12-06 14:41:00.306581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.393 [2024-12-06 14:41:00.309938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.309976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 
[2024-12-06 14:41:00.310004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.313278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.313314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.313326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.316570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.316728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.316745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.320342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.320379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.320391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.323724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.323761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.323773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.327261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.327297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.327310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.330679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.330717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.330731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.334295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.334335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.334348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.337720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.337761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.337775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.394 [2024-12-06 14:41:00.341725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.394 [2024-12-06 14:41:00.341767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.394 [2024-12-06 14:41:00.341781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.345250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.345287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.345300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.348670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.348706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.348718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.351793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.351958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.351974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.355645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.355683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.355695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.358433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.358477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.358490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.361481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.361516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.361528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.364917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.364955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.364967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.368766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.655 [2024-12-06 14:41:00.368803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.655 [2024-12-06 14:41:00.368816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.655 [2024-12-06 14:41:00.371821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.371856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.371868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.374902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.375063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.378310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.378476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.378492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.382061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.382240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.382257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.385269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.385306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.385318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.388639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.388676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.388688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.392136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.392172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.392184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.395304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.395340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.395352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.398954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.398989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.399001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.402117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.402171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.402183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.405120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 
[2024-12-06 14:41:00.405156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.405168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.408467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.408501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.408513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.412138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.412174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.412186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.415343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.415377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.415389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.418518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.418553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.418564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.421427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.421459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.421471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.424326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.424362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.424374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.427765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.427801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.427814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.431709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.431746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.431773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.434646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.434681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.434693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.438658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.438694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.438706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.442358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.442395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.442417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.445378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.445557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.445574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.448843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.448880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.448892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.452385] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.452433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.452445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.455338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.455374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.455386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.656 [2024-12-06 14:41:00.459347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.656 [2024-12-06 14:41:00.459384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.656 [2024-12-06 14:41:00.459395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.462841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.462878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.462890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.466558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.466594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.466605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.470218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.470254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.470266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.473976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.474043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.474055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
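The repeated message pairs above are the host side of an NVMe/TCP data digest failure: when data digest is enabled, each C2HData PDU carries a CRC32C digest (DDGST) over its payload, nvme_tcp_accel_seq_recv_compute_crc32_done recomputes that digest as the PDU is received, and on a mismatch the affected READ is completed back to the caller with the generic status Command Transient Transport Error (00/22), which nvme_qpair.c prints as the NOTICE lines seen here. The sketch below shows the digest check in isolation; it is a minimal stand-alone example with a made-up 32-byte payload, not SPDK's implementation (judging by the function name in the log, SPDK routes the CRC32C through its accel sequence path).

/* Minimal sketch: NVMe/TCP-style data digest (CRC32C) verification. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Bitwise (reflected) CRC32C, polynomial 0x1EDC6F41 (reflected form 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[32];
    uint32_t sent_ddgst, recv_ddgst;

    memset(payload, 0xA5, sizeof(payload));

    /* Digest the target appends to the C2HData PDU. */
    sent_ddgst = crc32c(payload, sizeof(payload));

    /* Flip one bit to simulate corruption in flight. */
    payload[7] ^= 0x01;

    /* Host-side check after the PDU and its DDGST have been read off the socket. */
    recv_ddgst = crc32c(payload, sizeof(payload));
    if (recv_ddgst != sent_ddgst) {
        /* This is the condition the "data digest error on tqpair" messages report;
         * the command is then failed with a transient transport error status. */
        printf("data digest error: expected 0x%08x, computed 0x%08x\n",
               sent_ddgst, recv_ddgst);
        return 1;
    }
    printf("data digest OK\n");
    return 0;
}

Because the status is the generic Command Transient Transport Error (00/22) rather than a media error, the failure is attributed to the transport and a retry of the same READ can succeed once the corruption stops, which is consistent with the run continuing through many of these completions.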
00:26:53.657 [2024-12-06 14:41:00.477456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.477491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.477502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.480977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.481013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.481025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.484372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.484421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.484434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.487936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.487972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.487985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.491762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.491800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.491812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.494728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.494764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.494776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.498658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.498695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.498707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.502336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.502372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.502384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.505087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.505156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.505168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.508725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.508895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.509038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.512711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.512895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.513012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.516500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.516685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.516840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.520665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.520839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.520954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.523923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.524071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.524086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.527134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.527170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.527183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.530375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.530554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.530573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.533711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.533872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.534116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.537346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.537530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.537550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.541002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.541039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.541052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.544265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.544428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.544445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.547963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.547998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.548009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.551002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.657 [2024-12-06 14:41:00.551038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.657 [2024-12-06 14:41:00.551050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.657 [2024-12-06 14:41:00.554036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.554072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.554084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.557255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.557422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.560632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.560668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.560681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.563796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.563834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.563845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.567211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.567247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.567258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.570133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.570297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 
14:41:00.570312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.573565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.573600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.573612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.576991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.577026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.577038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.579913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.579947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.579959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.583099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.583134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.583146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.586577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.586612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.586623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.589311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.589345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.589356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.592756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.592791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.592803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.596024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.596058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.596069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.598811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.598846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.598858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.601495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.601528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.601539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.604717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.604751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.604770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.607620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.607654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.607665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.611436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.611469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.611481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.614766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.614925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.614940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.658 [2024-12-06 14:41:00.618414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.658 [2024-12-06 14:41:00.618461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.658 [2024-12-06 14:41:00.618475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.918 [2024-12-06 14:41:00.622215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.918 [2024-12-06 14:41:00.622266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.622279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.626196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.626231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.626242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.629651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.629731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.629745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.633150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.633185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.633197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.636827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.636862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.636873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.639702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.639736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.639748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.642832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.642878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.642899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.645811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.645850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.645863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.648385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.648432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.648443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.651851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.651888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.651899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.654928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.654963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.654975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.657660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.657736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.657749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.660608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.660642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.660653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.663792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.663828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.663840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.666956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.667127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.667166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.670219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.670255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.670267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.673159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.673193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.673205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.676631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.676666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.676677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.679890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.679958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.679970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.683146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 
[2024-12-06 14:41:00.683180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.683191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.686346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.686381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.686392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.689564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.689599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.689610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.693373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.693416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.693429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.696484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.696519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.696530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.699547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.699583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.699595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.702696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.702730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.702742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.705883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xac27e0) 00:26:53.919 [2024-12-06 14:41:00.705920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.919 [2024-12-06 14:41:00.705932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.919 [2024-12-06 14:41:00.708879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.708913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.708925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.711524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.711559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.711570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.714857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.714892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.714903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.717834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.717870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.717882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.721119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.721278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.721294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.724660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.724696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.724707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.728390] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.728435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.728446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.730983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.731017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.731028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.734318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.734485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.734504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.737856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.737998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.738015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.741044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.741080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.741092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.744199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.744372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.744515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.746748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.746785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.746796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:53.920 [2024-12-06 14:41:00.750522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.750556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.750568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.753927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.753964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.753976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.757046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.757200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.757216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.760018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.760054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.760066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.763184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.763219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.763231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.765635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.765694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.765707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.768865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.769024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.769040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.772699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.772734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.772746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.775800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.775837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.775848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.778864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.778899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.778911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.782067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.782257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.782273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.785199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.785234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.785246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.788705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.788740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.788752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.792054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.920 [2024-12-06 14:41:00.792090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.920 [2024-12-06 14:41:00.792102] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.920 [2024-12-06 14:41:00.794421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.794605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.794620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.797626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.797660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.797706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.800822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.800857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.800869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.803691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.803726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.803737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.807121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.807155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.807166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.810530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.810566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.810577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.813842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.813880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.813892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.816492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.816525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.816536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.819624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.819661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.819672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.822150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.822185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.822197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.825254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.825286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.825297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.828995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.829030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.829042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.832241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.832277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.832289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.835782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.835816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:53.921 [2024-12-06 14:41:00.835828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.839736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.839772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.839783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.843040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.843075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.843086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.846512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.846546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.846557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.849404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.849452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.849463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.852274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.852309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.852320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.855354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.855389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.855400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.858543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.858577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.858588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.861461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.861494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.861504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.921 [2024-12-06 14:41:00.864376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac27e0) 00:26:53.921 [2024-12-06 14:41:00.864416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.921 [2024-12-06 14:41:00.864429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.921 00:26:53.921 Latency(us) 00:26:53.921 [2024-12-06T14:41:00.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.921 [2024-12-06T14:41:00.891Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:53.921 nvme0n1 : 2.00 8072.74 1009.09 0.00 0.00 1978.46 536.20 12034.79 00:26:53.921 [2024-12-06T14:41:00.891Z] =================================================================================================================== 00:26:53.921 [2024-12-06T14:41:00.891Z] Total : 8072.74 1009.09 0.00 0.00 1978.46 536.20 12034.79 00:26:53.921 0 00:26:53.921 14:41:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:54.180 14:41:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:54.180 14:41:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:54.180 14:41:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:54.180 | .driver_specific 00:26:54.180 | .nvme_error 00:26:54.180 | .status_code 00:26:54.180 | .command_transient_transport_error' 00:26:54.439 14:41:01 -- host/digest.sh@71 -- # (( 521 > 0 )) 00:26:54.439 14:41:01 -- host/digest.sh@73 -- # killprocess 87692 00:26:54.439 14:41:01 -- common/autotest_common.sh@936 -- # '[' -z 87692 ']' 00:26:54.439 14:41:01 -- common/autotest_common.sh@940 -- # kill -0 87692 00:26:54.439 14:41:01 -- common/autotest_common.sh@941 -- # uname 00:26:54.439 14:41:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.439 14:41:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87692 00:26:54.439 killing process with pid 87692 00:26:54.439 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.439 00:26:54.439 Latency(us) 00:26:54.439 [2024-12-06T14:41:01.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.439 [2024-12-06T14:41:01.409Z] =================================================================================================================== 00:26:54.439 [2024-12-06T14:41:01.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.439 14:41:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:54.439 14:41:01 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:26:54.439 14:41:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87692' 00:26:54.439 14:41:01 -- common/autotest_common.sh@955 -- # kill 87692 00:26:54.439 14:41:01 -- common/autotest_common.sh@960 -- # wait 87692 00:26:54.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.698 14:41:01 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:26:54.698 14:41:01 -- host/digest.sh@54 -- # local rw bs qd 00:26:54.698 14:41:01 -- host/digest.sh@56 -- # rw=randwrite 00:26:54.698 14:41:01 -- host/digest.sh@56 -- # bs=4096 00:26:54.698 14:41:01 -- host/digest.sh@56 -- # qd=128 00:26:54.698 14:41:01 -- host/digest.sh@58 -- # bperfpid=87777 00:26:54.698 14:41:01 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:54.698 14:41:01 -- host/digest.sh@60 -- # waitforlisten 87777 /var/tmp/bperf.sock 00:26:54.698 14:41:01 -- common/autotest_common.sh@829 -- # '[' -z 87777 ']' 00:26:54.698 14:41:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.698 14:41:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.698 14:41:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.698 14:41:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.698 14:41:01 -- common/autotest_common.sh@10 -- # set +x 00:26:54.698 [2024-12-06 14:41:01.561376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:54.698 [2024-12-06 14:41:01.561672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87777 ] 00:26:54.958 [2024-12-06 14:41:01.695362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.958 [2024-12-06 14:41:01.769493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.895 14:41:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.895 14:41:02 -- common/autotest_common.sh@862 -- # return 0 00:26:55.895 14:41:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:55.895 14:41:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:55.895 14:41:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:55.895 14:41:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.895 14:41:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.895 14:41:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.895 14:41:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.895 14:41:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.464 nvme0n1 00:26:56.464 14:41:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:56.464 14:41:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.464 14:41:03 -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.464 14:41:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.465 14:41:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:56.465 14:41:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:56.465 Running I/O for 2 seconds... 00:26:56.465 [2024-12-06 14:41:03.275092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f6890 00:26:56.465 [2024-12-06 14:41:03.275471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.275499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.285001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fd640 00:26:56.465 [2024-12-06 14:41:03.285464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.285491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.293823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e01f8 00:26:56.465 [2024-12-06 14:41:03.294298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.294324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.301582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f6cc8 00:26:56.465 [2024-12-06 14:41:03.301710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.301746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.311461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f96f8 00:26:56.465 [2024-12-06 14:41:03.311696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.311716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.321979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0ff8 00:26:56.465 [2024-12-06 14:41:03.322751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.322777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.329972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x74b8f0) with pdu=0x2000190fd640 00:26:56.465 [2024-12-06 14:41:03.330902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.330930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.338821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f2948 00:26:56.465 [2024-12-06 14:41:03.340226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.340254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.347697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc560 00:26:56.465 [2024-12-06 14:41:03.348462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.348489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.356529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0bc0 00:26:56.465 [2024-12-06 14:41:03.358039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.358067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.365451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4f40 00:26:56.465 [2024-12-06 14:41:03.366760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.366787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.374518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e9168 00:26:56.465 [2024-12-06 14:41:03.374852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.374872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.385505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f6020 00:26:56.465 [2024-12-06 14:41:03.386446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.386481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.392108] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f46d0 00:26:56.465 [2024-12-06 14:41:03.392238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.392257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.403592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e6300 00:26:56.465 [2024-12-06 14:41:03.404227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.404253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.411628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f7da8 00:26:56.465 [2024-12-06 14:41:03.412148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.412176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.422040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fac10 00:26:56.465 [2024-12-06 14:41:03.422873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.422900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:56.465 [2024-12-06 14:41:03.431983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0bc0 00:26:56.465 [2024-12-06 14:41:03.432693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.465 [2024-12-06 14:41:03.432722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.441765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fac10 00:26:56.725 [2024-12-06 14:41:03.442508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.442534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.449732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5658 00:26:56.725 [2024-12-06 14:41:03.450780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.450806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.458845] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5658 00:26:56.725 [2024-12-06 14:41:03.459822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.459847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.467819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5658 00:26:56.725 [2024-12-06 14:41:03.468862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.468888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.476763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ee190 00:26:56.725 [2024-12-06 14:41:03.477925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.477955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.485657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fb8b8 00:26:56.725 [2024-12-06 14:41:03.486115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.486158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.494820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0bc0 00:26:56.725 [2024-12-06 14:41:03.495201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.495226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.504328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fcdd0 00:26:56.725 [2024-12-06 14:41:03.505440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.505466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.513471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f3a28 00:26:56.725 [2024-12-06 14:41:03.514035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.514063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 
14:41:03.522543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f1ca0 00:26:56.725 [2024-12-06 14:41:03.523142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.725 [2024-12-06 14:41:03.523168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:56.725 [2024-12-06 14:41:03.531078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5658 00:26:56.725 [2024-12-06 14:41:03.532274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.532301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.539610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ef6a8 00:26:56.726 [2024-12-06 14:41:03.539788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.539812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.550863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ea248 00:26:56.726 [2024-12-06 14:41:03.551818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.551844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.557063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190efae0 00:26:56.726 [2024-12-06 14:41:03.557848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.557876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.566179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e0ea0 00:26:56.726 [2024-12-06 14:41:03.566450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.566469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.575258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190df988 00:26:56.726 [2024-12-06 14:41:03.575495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.575514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:56.726 
[2024-12-06 14:41:03.584156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4298 00:26:56.726 [2024-12-06 14:41:03.584354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.584374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.594558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fda78 00:26:56.726 [2024-12-06 14:41:03.595925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.595953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.603518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e1710 00:26:56.726 [2024-12-06 14:41:03.604894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.604921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.612365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f7970 00:26:56.726 [2024-12-06 14:41:03.613590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.613616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.621356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eb760 00:26:56.726 [2024-12-06 14:41:03.622604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.622632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.629373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5220 00:26:56.726 [2024-12-06 14:41:03.630462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.630487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.638522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ee5c8 00:26:56.726 [2024-12-06 14:41:03.638942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.638966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:26:56.726 [2024-12-06 14:41:03.647545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eff18 00:26:56.726 [2024-12-06 14:41:03.648035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.648060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.656023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f1ca0 00:26:56.726 [2024-12-06 14:41:03.656767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.656794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.664316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0788 00:26:56.726 [2024-12-06 14:41:03.664911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.664939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.674913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fda78 00:26:56.726 [2024-12-06 14:41:03.675509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.675535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:56.726 [2024-12-06 14:41:03.683902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fa3a0 00:26:56.726 [2024-12-06 14:41:03.684996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.726 [2024-12-06 14:41:03.685024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.694072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc560 00:26:56.986 [2024-12-06 14:41:03.695259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.695286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.705184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ee5c8 00:26:56.986 [2024-12-06 14:41:03.706053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.706080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005c 
p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.713317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f3e60 00:26:56.986 [2024-12-06 14:41:03.714318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.714344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.722314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e9168 00:26:56.986 [2024-12-06 14:41:03.723717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.723743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.731303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f92c0 00:26:56.986 [2024-12-06 14:41:03.732118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.732145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.740300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e6300 00:26:56.986 [2024-12-06 14:41:03.741828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.741856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.748391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ed4e8 00:26:56.986 [2024-12-06 14:41:03.749176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.749201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.757195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e01f8 00:26:56.986 [2024-12-06 14:41:03.757834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.757862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.767835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190dfdc0 00:26:56.986 [2024-12-06 14:41:03.768434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.768460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 
cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.776295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ec840 00:26:56.986 [2024-12-06 14:41:03.777018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.986 [2024-12-06 14:41:03.777045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:56.986 [2024-12-06 14:41:03.785201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f81e0 00:26:56.986 [2024-12-06 14:41:03.785708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.785733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.794138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5a90 00:26:56.987 [2024-12-06 14:41:03.794562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.794586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.802956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190efae0 00:26:56.987 [2024-12-06 14:41:03.803359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.803389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.811762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f57b0 00:26:56.987 [2024-12-06 14:41:03.812173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.812197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.822513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fdeb0 00:26:56.987 [2024-12-06 14:41:03.823801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.823828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.831561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f2d80 00:26:56.987 [2024-12-06 14:41:03.832863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.832889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.840443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fbcf0 00:26:56.987 [2024-12-06 14:41:03.841513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.841543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.849296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e95a0 00:26:56.987 [2024-12-06 14:41:03.850465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.850503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.858267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f1ca0 00:26:56.987 [2024-12-06 14:41:03.859550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.859575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.867059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ebfd0 00:26:56.987 [2024-12-06 14:41:03.868499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.868524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.875831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f81e0 00:26:56.987 [2024-12-06 14:41:03.877236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.877261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.884677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5a90 00:26:56.987 [2024-12-06 14:41:03.885938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.885966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.892292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc998 00:26:56.987 [2024-12-06 14:41:03.893127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.893153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.901108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ebfd0 00:26:56.987 [2024-12-06 14:41:03.902506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.902533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.910002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e3d08 00:26:56.987 [2024-12-06 14:41:03.910746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.910772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.918379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f2d80 00:26:56.987 [2024-12-06 14:41:03.918459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.918478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.927159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f6458 00:26:56.987 [2024-12-06 14:41:03.927219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.927237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.937461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0ff8 00:26:56.987 [2024-12-06 14:41:03.938472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.938498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.987 [2024-12-06 14:41:03.947037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5ec8 00:26:56.987 [2024-12-06 14:41:03.947714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.987 [2024-12-06 14:41:03.947740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:03.955116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f57b0 00:26:57.247 [2024-12-06 14:41:03.956327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:03.956353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:03.966169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ea680 00:26:57.247 [2024-12-06 14:41:03.966743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:03.966769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:03.973895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e9e10 00:26:57.247 [2024-12-06 14:41:03.974620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:03.974646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:03.984616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ea248 00:26:57.247 [2024-12-06 14:41:03.985429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:03.985455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:03.991220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fbcf0 00:26:57.247 [2024-12-06 14:41:03.991304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:03.991322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.000991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f9f68 00:26:57.247 [2024-12-06 14:41:04.001209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.001244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.009861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0bc0 00:26:57.247 [2024-12-06 14:41:04.010919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.010946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.020464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc560 00:26:57.247 [2024-12-06 14:41:04.021178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.021204] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.028279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fb480 00:26:57.247 [2024-12-06 14:41:04.029052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.029078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.037592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e8088 00:26:57.247 [2024-12-06 14:41:04.037979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.038003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.046691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4298 00:26:57.247 [2024-12-06 14:41:04.047204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.047231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.055714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f57b0 00:26:57.247 [2024-12-06 14:41:04.056563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.056590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.063997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f57b0 00:26:57.247 [2024-12-06 14:41:04.064766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.064793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.072908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f7100 00:26:57.247 [2024-12-06 14:41:04.073901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.073929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.081835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ebfd0 00:26:57.247 [2024-12-06 14:41:04.082673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.082699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.091649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f1430 00:26:57.247 [2024-12-06 14:41:04.092289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.092318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.100956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ed4e8 00:26:57.247 [2024-12-06 14:41:04.101414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.101452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.109519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fa3a0 00:26:57.247 [2024-12-06 14:41:04.110144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.110172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.118534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5ec8 00:26:57.247 [2024-12-06 14:41:04.119286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.119313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.128634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ea248 00:26:57.247 [2024-12-06 14:41:04.129407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.129441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.136530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e6738 00:26:57.247 [2024-12-06 14:41:04.137407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.247 [2024-12-06 14:41:04.137446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:57.247 [2024-12-06 14:41:04.145309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4298 00:26:57.247 [2024-12-06 14:41:04.146494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 14:41:04.146519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:57.248 [2024-12-06 14:41:04.155632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0bc0 00:26:57.248 [2024-12-06 14:41:04.157117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 14:41:04.157144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:57.248 [2024-12-06 14:41:04.163635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190df550 00:26:57.248 [2024-12-06 14:41:04.164694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 14:41:04.164720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:57.248 [2024-12-06 14:41:04.172999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e99d8 00:26:57.248 [2024-12-06 14:41:04.173438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 14:41:04.173465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:57.248 [2024-12-06 14:41:04.182753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fef90 00:26:57.248 [2024-12-06 14:41:04.183217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 14:41:04.183244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:57.248 [2024-12-06 14:41:04.191936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e0a68 00:26:57.248 [2024-12-06 14:41:04.192655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 14:41:04.192683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:57.248 [2024-12-06 14:41:04.202382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e0a68 00:26:57.248 [2024-12-06 14:41:04.203066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 14:41:04.203092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:57.248 [2024-12-06 14:41:04.210874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190edd58 00:26:57.248 [2024-12-06 14:41:04.212108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.248 [2024-12-06 
14:41:04.212135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:57.506 [2024-12-06 14:41:04.221017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e6fa8 00:26:57.506 [2024-12-06 14:41:04.222228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.506 [2024-12-06 14:41:04.222255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:57.506 [2024-12-06 14:41:04.229806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5a90 00:26:57.506 [2024-12-06 14:41:04.230209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.506 [2024-12-06 14:41:04.230237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:57.506 [2024-12-06 14:41:04.239215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ea248 00:26:57.506 [2024-12-06 14:41:04.240100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.506 [2024-12-06 14:41:04.240127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:57.506 [2024-12-06 14:41:04.248438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190df550 00:26:57.506 [2024-12-06 14:41:04.248943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.506 [2024-12-06 14:41:04.248970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:57.506 [2024-12-06 14:41:04.257551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e73e0 00:26:57.506 [2024-12-06 14:41:04.258141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.506 [2024-12-06 14:41:04.258169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:57.506 [2024-12-06 14:41:04.266945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fb8b8 00:26:57.506 [2024-12-06 14:41:04.267517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.267544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.276134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5ec8 00:26:57.507 [2024-12-06 14:41:04.276641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 
[2024-12-06 14:41:04.276666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.285017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f57b0 00:26:57.507 [2024-12-06 14:41:04.285506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.285530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.294581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190de038 00:26:57.507 [2024-12-06 14:41:04.295241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.295269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.303514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f92c0 00:26:57.507 [2024-12-06 14:41:04.304042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.304068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.312365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4f40 00:26:57.507 [2024-12-06 14:41:04.312885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.312912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.321224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fe720 00:26:57.507 [2024-12-06 14:41:04.321783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.321812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.330245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e6738 00:26:57.507 [2024-12-06 14:41:04.330718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.330749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.338596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f3e60 00:26:57.507 [2024-12-06 14:41:04.339521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25390 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:57.507 [2024-12-06 14:41:04.339547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.347991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e23b8 00:26:57.507 [2024-12-06 14:41:04.348471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.348498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.356819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5220 00:26:57.507 [2024-12-06 14:41:04.357261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.357295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.365732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fbcf0 00:26:57.507 [2024-12-06 14:41:04.366245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.366272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.374288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ec408 00:26:57.507 [2024-12-06 14:41:04.375184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.375210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.383041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f2d80 00:26:57.507 [2024-12-06 14:41:04.384131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.384157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.393308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0350 00:26:57.507 [2024-12-06 14:41:04.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.394695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.401560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e73e0 00:26:57.507 [2024-12-06 14:41:04.402719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8082 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.402745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.410635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f9f68 00:26:57.507 [2024-12-06 14:41:04.411159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.411185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.419778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f8618 00:26:57.507 [2024-12-06 14:41:04.420375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.420401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.427516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e6738 00:26:57.507 [2024-12-06 14:41:04.427707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.427725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.438557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eea00 00:26:57.507 [2024-12-06 14:41:04.439264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.439291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.446296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f46d0 00:26:57.507 [2024-12-06 14:41:04.447131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.447157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.507 [2024-12-06 14:41:04.455126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eaab8 00:26:57.507 [2024-12-06 14:41:04.455359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.507 [2024-12-06 14:41:04.455377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:57.508 [2024-12-06 14:41:04.463987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e9e10 00:26:57.508 [2024-12-06 14:41:04.464215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16952 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.508 [2024-12-06 14:41:04.464239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:57.508 [2024-12-06 14:41:04.473272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eee38 00:26:57.508 [2024-12-06 14:41:04.473582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.508 [2024-12-06 14:41:04.473621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.483270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fa3a0 00:26:57.766 [2024-12-06 14:41:04.484280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.484307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.493051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ddc00 00:26:57.766 [2024-12-06 14:41:04.494211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.494239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.501917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ec408 00:26:57.766 [2024-12-06 14:41:04.502491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.502516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.510577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e8088 00:26:57.766 [2024-12-06 14:41:04.511707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.511734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.519088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190dece0 00:26:57.766 [2024-12-06 14:41:04.519215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.519234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.528004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eee38 00:26:57.766 [2024-12-06 14:41:04.528113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:19487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.528132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.539296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eea00 00:26:57.766 [2024-12-06 14:41:04.540191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.540217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.545817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f7100 00:26:57.766 [2024-12-06 14:41:04.545986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.546005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.555499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f46d0 00:26:57.766 [2024-12-06 14:41:04.556608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.766 [2024-12-06 14:41:04.556635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:57.766 [2024-12-06 14:41:04.565831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4b08 00:26:57.766 [2024-12-06 14:41:04.567338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.567365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.573783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4f40 00:26:57.767 [2024-12-06 14:41:04.574837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.574863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.582845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f92c0 00:26:57.767 [2024-12-06 14:41:04.583311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.583339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.592637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4f40 00:26:57.767 [2024-12-06 14:41:04.593208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:16953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.593233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.601552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190efae0 00:26:57.767 [2024-12-06 14:41:04.602174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.602200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.609237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e7818 00:26:57.767 [2024-12-06 14:41:04.609962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.609989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.619291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc560 00:26:57.767 [2024-12-06 14:41:04.619856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.619884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.627455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fd640 00:26:57.767 [2024-12-06 14:41:04.627966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.627993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.635384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ed4e8 00:26:57.767 [2024-12-06 14:41:04.635470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.635490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.646241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e99d8 00:26:57.767 [2024-12-06 14:41:04.646722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.646748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.655079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eaab8 00:26:57.767 [2024-12-06 14:41:04.655557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.655583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.663955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fda78 00:26:57.767 [2024-12-06 14:41:04.664444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.664469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.673172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eb328 00:26:57.767 [2024-12-06 14:41:04.674055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.674083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.680603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e88f8 00:26:57.767 [2024-12-06 14:41:04.681525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.681551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.691230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f1868 00:26:57.767 [2024-12-06 14:41:04.691825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.691851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.699609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fac10 00:26:57.767 [2024-12-06 14:41:04.700560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.700585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.708658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e3d08 00:26:57.767 [2024-12-06 14:41:04.709029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.709052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.719066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ecc78 00:26:57.767 [2024-12-06 14:41:04.719830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.719856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:57.767 [2024-12-06 14:41:04.726917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ec408 00:26:57.767 [2024-12-06 14:41:04.727810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.767 [2024-12-06 14:41:04.727838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.736738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc998 00:26:58.026 [2024-12-06 14:41:04.738008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.738082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.747816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f92c0 00:26:58.026 [2024-12-06 14:41:04.748486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.748512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.756920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ed0b0 00:26:58.026 [2024-12-06 14:41:04.758308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.758335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.766885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0bc0 00:26:58.026 [2024-12-06 14:41:04.767717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.767759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.776010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f1ca0 00:26:58.026 [2024-12-06 14:41:04.776673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.776698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.784201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f8e88 00:26:58.026 [2024-12-06 
14:41:04.785275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.785303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.792741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e9168 00:26:58.026 [2024-12-06 14:41:04.793007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.793031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.803966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4298 00:26:58.026 [2024-12-06 14:41:04.805000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.805027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.810528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e4140 00:26:58.026 [2024-12-06 14:41:04.810829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.810857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.821537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e01f8 00:26:58.026 [2024-12-06 14:41:04.822410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.822446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.828078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e1b48 00:26:58.026 [2024-12-06 14:41:04.828171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.828190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.838837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eff18 00:26:58.026 [2024-12-06 14:41:04.840127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.026 [2024-12-06 14:41:04.840155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:58.026 [2024-12-06 14:41:04.847460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f6458 00:26:58.027 
[2024-12-06 14:41:04.847999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.848025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.856045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190df550 00:26:58.027 [2024-12-06 14:41:04.856836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.856862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.864351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e9168 00:26:58.027 [2024-12-06 14:41:04.864987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.865014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.874183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f3a28 00:26:58.027 [2024-12-06 14:41:04.875084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.875111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.884817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc128 00:26:58.027 [2024-12-06 14:41:04.885739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.885766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.893804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e12d8 00:26:58.027 [2024-12-06 14:41:04.895126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.895153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.901188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fef90 00:26:58.027 [2024-12-06 14:41:04.902012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.902040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.911512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with 
pdu=0x2000190e73e0 00:26:58.027 [2024-12-06 14:41:04.912380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.912413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.919255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190de038 00:26:58.027 [2024-12-06 14:41:04.920264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.920291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.928242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f96f8 00:26:58.027 [2024-12-06 14:41:04.928776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.928802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.937387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ea248 00:26:58.027 [2024-12-06 14:41:04.938034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.938062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.945969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190efae0 00:26:58.027 [2024-12-06 14:41:04.947172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.947199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.954551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190dece0 00:26:58.027 [2024-12-06 14:41:04.954726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.954744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.965774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ed4e8 00:26:58.027 [2024-12-06 14:41:04.966765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.966790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.972284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x74b8f0) with pdu=0x2000190eb760 00:26:58.027 [2024-12-06 14:41:04.972517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.972536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.982273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e1710 00:26:58.027 [2024-12-06 14:41:04.982645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.982669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:58.027 [2024-12-06 14:41:04.991764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190dece0 00:26:58.027 [2024-12-06 14:41:04.992700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.027 [2024-12-06 14:41:04.992729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.000739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f6020 00:26:58.285 [2024-12-06 14:41:05.000977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.001000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.011784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190dece0 00:26:58.285 [2024-12-06 14:41:05.012416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.012450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.020803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5a90 00:26:58.285 [2024-12-06 14:41:05.022173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.022201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.028669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4b08 00:26:58.285 [2024-12-06 14:41:05.029390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.029426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.037490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x74b8f0) with pdu=0x2000190fa3a0 00:26:58.285 [2024-12-06 14:41:05.038574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.038602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.046500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f7970 00:26:58.285 [2024-12-06 14:41:05.047494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.047519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.055324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f4b08 00:26:58.285 [2024-12-06 14:41:05.056372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.056398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.064147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190eaef0 00:26:58.285 [2024-12-06 14:41:05.065263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.065290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.072676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e99d8 00:26:58.285 [2024-12-06 14:41:05.073737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.073765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.083303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e8088 00:26:58.285 [2024-12-06 14:41:05.084010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.084036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.091236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f1430 00:26:58.285 [2024-12-06 14:41:05.091815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.091842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.100159] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ed920 00:26:58.285 [2024-12-06 14:41:05.100966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.100994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.109002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ebfd0 00:26:58.285 [2024-12-06 14:41:05.110137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.110165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.117954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f9f68 00:26:58.285 [2024-12-06 14:41:05.118597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.118624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.126695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ef270 00:26:58.285 [2024-12-06 14:41:05.127567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.127594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.137513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f31b8 00:26:58.285 [2024-12-06 14:41:05.138510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.138536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.145436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fd208 00:26:58.285 [2024-12-06 14:41:05.146393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.146432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.154434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e5ec8 00:26:58.285 [2024-12-06 14:41:05.155627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.155654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.163287] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f8e88 00:26:58.285 [2024-12-06 14:41:05.164102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.164129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.172179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190de8a8 00:26:58.285 [2024-12-06 14:41:05.172755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.172782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.180993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f0788 00:26:58.285 [2024-12-06 14:41:05.181517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.181542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.190021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fc560 00:26:58.285 [2024-12-06 14:41:05.190588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.285 [2024-12-06 14:41:05.190614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:58.285 [2024-12-06 14:41:05.199284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190ee190 00:26:58.286 [2024-12-06 14:41:05.199903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.286 [2024-12-06 14:41:05.199929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:58.286 [2024-12-06 14:41:05.208577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190f81e0 00:26:58.286 [2024-12-06 14:41:05.209086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.286 [2024-12-06 14:41:05.209118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:58.286 [2024-12-06 14:41:05.217884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e99d8 00:26:58.286 [2024-12-06 14:41:05.218965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.286 [2024-12-06 14:41:05.218991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:58.286 [2024-12-06 
14:41:05.226960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e6fa8 00:26:58.286 [2024-12-06 14:41:05.227415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.286 [2024-12-06 14:41:05.227437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:58.286 [2024-12-06 14:41:05.236385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e99d8 00:26:58.286 [2024-12-06 14:41:05.237492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.286 [2024-12-06 14:41:05.237518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:58.286 [2024-12-06 14:41:05.245708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190e3498 00:26:58.286 [2024-12-06 14:41:05.246434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.286 [2024-12-06 14:41:05.246466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:58.543 [2024-12-06 14:41:05.254301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74b8f0) with pdu=0x2000190fdeb0 00:26:58.543 [2024-12-06 14:41:05.255068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.543 [2024-12-06 14:41:05.255097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:58.543 00:26:58.543 Latency(us) 00:26:58.543 [2024-12-06T14:41:05.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.543 [2024-12-06T14:41:05.513Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:58.543 nvme0n1 : 2.00 27970.21 109.26 0.00 0.00 4571.58 1809.69 13941.29 00:26:58.543 [2024-12-06T14:41:05.513Z] =================================================================================================================== 00:26:58.543 [2024-12-06T14:41:05.513Z] Total : 27970.21 109.26 0.00 0.00 4571.58 1809.69 13941.29 00:26:58.543 0 00:26:58.543 14:41:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:58.543 14:41:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:58.543 14:41:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:58.543 14:41:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:58.543 | .driver_specific 00:26:58.543 | .nvme_error 00:26:58.543 | .status_code 00:26:58.543 | .command_transient_transport_error' 00:26:58.801 14:41:05 -- host/digest.sh@71 -- # (( 219 > 0 )) 00:26:58.801 14:41:05 -- host/digest.sh@73 -- # killprocess 87777 00:26:58.801 14:41:05 -- common/autotest_common.sh@936 -- # '[' -z 87777 ']' 00:26:58.801 14:41:05 -- common/autotest_common.sh@940 -- # kill -0 87777 00:26:58.801 14:41:05 -- common/autotest_common.sh@941 -- # uname 00:26:58.801 14:41:05 -- common/autotest_common.sh@941 -- # '[' Linux 
= Linux ']' 00:26:58.801 14:41:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87777 00:26:58.801 14:41:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:58.801 14:41:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:58.801 killing process with pid 87777 00:26:58.801 14:41:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87777' 00:26:58.801 Received shutdown signal, test time was about 2.000000 seconds 00:26:58.801 00:26:58.801 Latency(us) 00:26:58.801 [2024-12-06T14:41:05.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.801 [2024-12-06T14:41:05.771Z] =================================================================================================================== 00:26:58.801 [2024-12-06T14:41:05.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.801 14:41:05 -- common/autotest_common.sh@955 -- # kill 87777 00:26:58.801 14:41:05 -- common/autotest_common.sh@960 -- # wait 87777 00:26:59.058 14:41:05 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:26:59.058 14:41:05 -- host/digest.sh@54 -- # local rw bs qd 00:26:59.058 14:41:05 -- host/digest.sh@56 -- # rw=randwrite 00:26:59.058 14:41:05 -- host/digest.sh@56 -- # bs=131072 00:26:59.058 14:41:05 -- host/digest.sh@56 -- # qd=16 00:26:59.058 14:41:05 -- host/digest.sh@58 -- # bperfpid=87867 00:26:59.058 14:41:05 -- host/digest.sh@60 -- # waitforlisten 87867 /var/tmp/bperf.sock 00:26:59.058 14:41:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:59.058 14:41:05 -- common/autotest_common.sh@829 -- # '[' -z 87867 ']' 00:26:59.058 14:41:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:59.058 14:41:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:59.058 14:41:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:59.058 14:41:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.058 14:41:05 -- common/autotest_common.sh@10 -- # set +x 00:26:59.058 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.058 Zero copy mechanism will not be used. 00:26:59.058 [2024-12-06 14:41:06.002062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
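The xtrace block above is where host/digest.sh turns the flood of digest-error notices into a pass/fail result: it queries bdevperf's RPC socket for per-bdev I/O statistics and extracts the count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR, then requires that count to be non-zero (the (( 219 > 0 )) check). A minimal sketch of that query, reusing the rpc.py path, socket, and jq filter shown in the trace; the rpc_py and errcount variable names are illustrative only:

# Ask bdevperf (listening on /var/tmp/bperf.sock) for nvme0n1 I/O statistics and extract
# the number of completions whose NVMe status was COMMAND TRANSIENT TRANSPORT ERROR.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The run passes only if the injected crc32c corruption actually surfaced as such errors.
(( errcount > 0 )) && echo "transient transport errors recorded: $errcount"

These per-status counters are only populated because the controller was created after bdev_nvme_set_options --nvme-error-stat, as the setup trace for the next run shows below.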
00:26:59.058 [2024-12-06 14:41:06.002170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87867 ] 00:26:59.317 [2024-12-06 14:41:06.140714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.317 [2024-12-06 14:41:06.231265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.249 14:41:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.249 14:41:06 -- common/autotest_common.sh@862 -- # return 0 00:27:00.249 14:41:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.249 14:41:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.249 14:41:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:00.249 14:41:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.249 14:41:07 -- common/autotest_common.sh@10 -- # set +x 00:27:00.249 14:41:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.249 14:41:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.249 14:41:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.506 nvme0n1 00:27:00.763 14:41:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:00.763 14:41:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.763 14:41:07 -- common/autotest_common.sh@10 -- # set +x 00:27:00.763 14:41:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.763 14:41:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:00.763 14:41:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:00.763 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:00.763 Zero copy mechanism will not be used. 00:27:00.763 Running I/O for 2 seconds... 
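At this point the harness has finished the 4096-byte pass and has set up run_bperf_err randwrite 131072 16: a fresh bdevperf instance is started on its own RPC socket, NVMe error statistics and unlimited bdev retries are enabled, crc32c error injection is explicitly disabled while the controller attaches over TCP with data digest (--ddgst) turned on, and only then is corruption injected into every 32nd crc32c operation before perform_tests drives I/O for two seconds. With the digests deliberately wrong, each affected write completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the notices that follow record. A condensed sketch of that sequence, using the paths and arguments from the trace (the spdk and bperf_sock variables are illustrative; accel_error_inject_error appears as rpc_cmd rather than bperf_rpc in the trace, so it presumably goes to the application's default RPC socket rather than to bperf_sock):

spdk=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock

# bdevperf pinned to core 1 (-m 2): 128 KiB random writes, queue depth 16, 2 s runs,
# started idle (-z) so it can be configured over the RPC socket before any I/O is issued.
"$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
# (the harness waits for $bperf_sock to appear before issuing any RPCs)

# Keep per-controller NVMe error status counters and retry failed I/O indefinitely.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled; error injection stays disabled for this step.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation (default RPC socket assumed here, per the rpc_cmd call in
# the trace) so data digests stop matching the payload, then run the timed workload.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests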
00:27:00.763 [2024-12-06 14:41:07.626954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.627194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.627224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.631027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.631149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.631172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.634795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.634899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.634921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.638431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.638534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.638555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.642202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.642280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.642302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.645898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.645978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.646000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.649569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.649725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.649747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.653374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.653571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.653592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.657186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.657374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.657400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.661027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.661159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.661180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.664866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.664962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.664982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.668671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.668767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.668787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.672455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.672548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.672568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.676271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.676374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.676394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.680073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.680179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.680199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.683880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.684075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.684096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.687638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.687798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.687818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.691828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.691954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.691976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.696130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.696228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.696251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.700142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.763 [2024-12-06 14:41:07.700232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.763 [2024-12-06 14:41:07.700253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.763 [2024-12-06 14:41:07.704010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.704111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 [2024-12-06 14:41:07.704131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.764 [2024-12-06 14:41:07.707823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.707936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 [2024-12-06 14:41:07.707957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.764 [2024-12-06 14:41:07.711644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.711762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 [2024-12-06 14:41:07.711784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.764 [2024-12-06 14:41:07.715547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.715724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 [2024-12-06 14:41:07.715750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.764 [2024-12-06 14:41:07.719278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.719492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 [2024-12-06 14:41:07.719514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.764 [2024-12-06 14:41:07.723187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.723290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 [2024-12-06 14:41:07.723311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.764 [2024-12-06 14:41:07.726992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.727082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 [2024-12-06 14:41:07.727103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.764 [2024-12-06 14:41:07.731352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:00.764 [2024-12-06 14:41:07.731455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.764 
[2024-12-06 14:41:07.731476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.735725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.735843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.735871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.739657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.739767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.739787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.743660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.743766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.743787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.747591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.747769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.747789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.751461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.751677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.751697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.755364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.755483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.755504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.759260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.759337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.759357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.763140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.763246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.763266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.766963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.767039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.767058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.770804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.770926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.774650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.774757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.774777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.778505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.778683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.778703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.782272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.782481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.782501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.786063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.786203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.786226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.789829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.789934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.789955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.793594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.793742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.793763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.797355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.797442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.797463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.801085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.801189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.801209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.039 [2024-12-06 14:41:07.804802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.039 [2024-12-06 14:41:07.804910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.039 [2024-12-06 14:41:07.804931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.040 [2024-12-06 14:41:07.808576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.040 [2024-12-06 14:41:07.808755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.040 [2024-12-06 14:41:07.808787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.040 [2024-12-06 14:41:07.812258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.040 [2024-12-06 14:41:07.812451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.040 [2024-12-06 14:41:07.812473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.040 [2024-12-06 14:41:07.816042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.040 [2024-12-06 14:41:07.816146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.040 [2024-12-06 14:41:07.816166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.040 [2024-12-06 14:41:07.819835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.040 [2024-12-06 14:41:07.819924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.040 [2024-12-06 14:41:07.819945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.040 [2024-12-06 14:41:07.823568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.040 [2024-12-06 14:41:07.823665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.823685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.827395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.827485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.827505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.831110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.831211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.831231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.834888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.834993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.835014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.838662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 
[2024-12-06 14:41:07.838836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.838857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.842455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.842627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.842647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.846228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.846331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.846351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.850058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.850162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.850182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.853762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.853862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.853882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.857520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.857597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.857617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.861255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.861366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.861386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.865028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with 
pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.865133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.865154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.868807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.868983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.869003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.872510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.872660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.872680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.876255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.876375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.876397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.879981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.880077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.880097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.883710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.883787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.883807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.887394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.887481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.887501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.891174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.891278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.891314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.894943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.895048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.895069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.898795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.898968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.898988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.902517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.902684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.902704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.906273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.906376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.906396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.910022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.910124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.910144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.041 [2024-12-06 14:41:07.913729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.041 [2024-12-06 14:41:07.913840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.041 [2024-12-06 14:41:07.913861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.917446] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.917524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.917544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.921122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.921221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.921241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.924874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.924979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.924999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.928648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.928824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.928845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.932381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.932581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.932601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.936092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.936203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.936223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.939785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.939888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.939908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:01.042 [2024-12-06 14:41:07.943519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.943614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.943634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.947244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.947340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.947361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.950995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.951104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.951124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.954728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.954832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.954852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.958525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.958703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.958724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.962166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.962362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.962382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.965918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.966037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.966057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.969632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.969743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.969764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.973751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.973856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.973877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.977733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.977814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.977835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.981466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.981576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.981595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.985197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.985304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.985323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.989225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.989403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.989437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.993131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.993305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.993326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:07.997138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:07.997277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:07.997298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:08.001057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:08.001152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:08.001173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.042 [2024-12-06 14:41:08.005398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.042 [2024-12-06 14:41:08.005543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.042 [2024-12-06 14:41:08.005589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.303 [2024-12-06 14:41:08.009934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.303 [2024-12-06 14:41:08.010049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.303 [2024-12-06 14:41:08.010070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.303 [2024-12-06 14:41:08.014362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.303 [2024-12-06 14:41:08.014559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.303 [2024-12-06 14:41:08.014582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.303 [2024-12-06 14:41:08.018668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.303 [2024-12-06 14:41:08.018828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.303 [2024-12-06 14:41:08.018849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.303 [2024-12-06 14:41:08.022854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.303 [2024-12-06 14:41:08.023035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.303 [2024-12-06 14:41:08.023055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.303 [2024-12-06 14:41:08.026803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.026966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.026986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.030617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.030737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.030758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.034390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.034515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.034535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.038197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.038274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.038294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.042038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.042122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.042158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.045887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.046044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.046065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.049783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.049914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 
[2024-12-06 14:41:08.049935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.053583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.053806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.053827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.057367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.057560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.057581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.061244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.061364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.061384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.065033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.065121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.065141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.068829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.068905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.068925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.072564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.072654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.072675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.076299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.076402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.076433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.080037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.080145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.080165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.083924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.084105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.084125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.087727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.087918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.087939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.091482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.091609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.091629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.095237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.095324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.095344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.099010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.099096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.099116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.102798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.102891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.102912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.106508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.106630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.106650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.110257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.110362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.110383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.114160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.114335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.114356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.117977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.118162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.118183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.121738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.121854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.304 [2024-12-06 14:41:08.121875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.304 [2024-12-06 14:41:08.125507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.304 [2024-12-06 14:41:08.125594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.125614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.129196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.129291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.129311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.132990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.133087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.133108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.136836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.136938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.136958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.140561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.140669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.140689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.144371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.144561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.144582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.148163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.148338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.148358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.152060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.152183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.152203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.155936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.156024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.156044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.159649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.159727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.159747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.163362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.163464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.163484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.167119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.167226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.167247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.170989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.171096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.171117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.174853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.175029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.175049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.178591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.178766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.178786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.182365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 
[2024-12-06 14:41:08.182501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.182522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.186150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.186244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.186264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.189984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.190066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.190087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.193792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.193882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.193903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.197582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.197728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.197765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.201372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.201493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.201513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.205183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.205360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.205379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.209005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with 
pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.209168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.209188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.212773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.212888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.212907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.216668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.216809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.216831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.221021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.221116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.221152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.225219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.225296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.305 [2024-12-06 14:41:08.225316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.305 [2024-12-06 14:41:08.229116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.305 [2024-12-06 14:41:08.229219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.229240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.232916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.233020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.233040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.236695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.236869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.236889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.240438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.240626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.240646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.244390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.244531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.244552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.248279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.248380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.248401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.252208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.252299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.252320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.256276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.256363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.256383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.260412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.260565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.260586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.264565] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.264676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.264697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.306 [2024-12-06 14:41:08.269431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.306 [2024-12-06 14:41:08.269732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.306 [2024-12-06 14:41:08.269774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.274107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.274288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.274309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.278205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.278310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.278330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.282194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.282315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.282337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.286292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.286373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.286393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.290012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.290141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.290160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:01.573 [2024-12-06 14:41:08.293805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.293922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.293943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.297551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.297653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.297716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.301353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.301541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.301561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.305157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.305321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.305341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.308968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.309083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.309104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.312690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.312773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.312792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.316422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.316510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.316530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.320112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.320197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.320218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.323849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.323954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.323974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.327599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.327704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.327724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.331385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.331575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.331596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.335099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.335279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.335299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.338954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.339066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.339086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.342746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.342832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.342852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.346474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.346550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.346570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.350189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.573 [2024-12-06 14:41:08.350280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.573 [2024-12-06 14:41:08.350300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.573 [2024-12-06 14:41:08.353857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.353983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.354004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.357602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.357735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.357757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.361424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.361613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.361634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.365169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.365352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.365372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.368956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.369078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.369099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.372682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.372769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.372790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.376356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.376472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.376492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.380073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.380169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.380189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.383855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.383956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.383976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.387604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.387709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.387730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.391394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.391586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.391607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.395189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.395388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 
14:41:08.395420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.399015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.399137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.399159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.402813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.402908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.402928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.406569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.406644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.406664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.410301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.410400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.410420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.414007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.414130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.414149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.417714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.417828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.417850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.421514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.421720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.421756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.425243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.425459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.425480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.428954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.429064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.429084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.432677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.432773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.432792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.436427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.436522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.436542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.440119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.440213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.440233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.443880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.443988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.444008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.447636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.447741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.447761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.574 [2024-12-06 14:41:08.451372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.574 [2024-12-06 14:41:08.451560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.574 [2024-12-06 14:41:08.451581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.455141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.455325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.455345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.458925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.459057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.459079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.462719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.462806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.462827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.466461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.466557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.466577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.470185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.470266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.470286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.473876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.473982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.474003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.477571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.477717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.477738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.481394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.481584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.481604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.485075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.485296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.485316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.488819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.488897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.492599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.492687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.492708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.496366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.496477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.496497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.500136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.500213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.500234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.503949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.504052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.504072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.507726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.507831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.507851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.511456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.511632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.511653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.515125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.515306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.515326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.518889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.519000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.519020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.522590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.522665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.522685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.526278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 
14:41:08.526356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.526376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.529935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.530019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.530040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.533642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.533793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.533813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.575 [2024-12-06 14:41:08.537751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.575 [2024-12-06 14:41:08.537887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.575 [2024-12-06 14:41:08.537909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.542190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.542366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.542386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.546008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.546253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.546274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.550136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.550248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.550267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.553887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with 
pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.553983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.554019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.557586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.557717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.557739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.561312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.561419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.561440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.565122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.565223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.565245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.568904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.569011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.569033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.572680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.572856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.572877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.576456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.576657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.576678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.580170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.580265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.580284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.583877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.583970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.583991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.587601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.587684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.587704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.591263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.591338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.591359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.595008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.595110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.595131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.598782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.598887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.598907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.602602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.602784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.602804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.606361] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.606562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.606583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.610118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.610225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.610245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.613832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.613934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.613955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.617550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.617650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.617697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.621314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.621402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.621433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.625063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.625165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.625185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.836 [2024-12-06 14:41:08.628736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.836 [2024-12-06 14:41:08.628842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.836 [2024-12-06 14:41:08.628862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
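(Note on the repeated errors above: the tcp.c data_crc32_calc_done messages come from the NVMe/TCP data-digest path. The receiver recomputes CRC32C over each data PDU payload and compares it with the DDGST field; this test injects mismatches on purpose, so every affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status. The standalone sketch below is illustrative only, not SPDK's implementation; crc32c() and verify_data_digest() are hypothetical names used to show the check in miniature.)

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Software CRC32C (Castagnoli, reflected polynomial 0x82F63B78), bit by bit. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1)));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical helper: recompute the digest over a data PDU payload and compare
     * it with the received DDGST. A mismatch corresponds to the "Data digest error"
     * lines in this log; the command is then failed with a transient transport error
     * so the host may retry it. */
    static int verify_data_digest(const void *payload, size_t len, uint32_t received_ddgst)
    {
        uint32_t computed = crc32c(payload, len);

        if (computed != received_ddgst) {
            fprintf(stderr, "data digest mismatch: computed 0x%08x, received 0x%08x\n",
                    computed, received_ddgst);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* Stand-in for a 32-block WRITE payload; flipping one bit after the digest
         * was computed models the corruption this test injects. */
        uint8_t payload[512];

        memset(payload, 0xA5, sizeof(payload));
        uint32_t ddgst = crc32c(payload, sizeof(payload));

        payload[100] ^= 0x01;
        return verify_data_digest(payload, sizeof(payload), ddgst) ? 1 : 0;
    }

(End of note; the captured test output continues below.)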
00:27:01.837 [2024-12-06 14:41:08.632559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.632732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.632752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.636214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.636397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.636431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.639972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.640090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.640110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.643712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.643805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.643825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.647422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.647499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.647519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.651126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.651201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.651220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.654885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.654987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.655007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.658630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.658739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.658760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.662388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.662577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.662597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.666097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.666278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.666298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.670040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.670161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.670182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.673758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.673912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.673934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.677493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.677574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.677594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.681197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.681291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.681311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.684943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.685044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.685064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.688710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.688815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.688835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.692557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.692736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.692756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.696350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.696521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.696542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.700066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.700168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.700188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.703840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.703938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.703958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.707595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.707690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.707710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.711281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.711357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.711377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.715015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.715123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.715144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.718784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.718892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.718912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.722641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.722825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.722851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.726311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.837 [2024-12-06 14:41:08.726525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.837 [2024-12-06 14:41:08.726546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.837 [2024-12-06 14:41:08.730151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.730256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.730277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.733903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.733993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 
[2024-12-06 14:41:08.734014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.737544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.737620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.737639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.741346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.741453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.741473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.745142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.745251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.745272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.748943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.749049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.749069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.752731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.752907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.752927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.756492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.756667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.756687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.760238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.760377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.760397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.764002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.764099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.764118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.767725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.767824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.767844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.771442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.771526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.771546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.775189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.775292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.775312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.778975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.779081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.779102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.782811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.782988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.783008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.786541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.786706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.786726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.790307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.790410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.790431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.793969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.794070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.794091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.797725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.797828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.797850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.838 [2024-12-06 14:41:08.801976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:01.838 [2024-12-06 14:41:08.802093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.838 [2024-12-06 14:41:08.802129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.098 [2024-12-06 14:41:08.806076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.098 [2024-12-06 14:41:08.806203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.098 [2024-12-06 14:41:08.806223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.098 [2024-12-06 14:41:08.810143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.098 [2024-12-06 14:41:08.810287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.098 [2024-12-06 14:41:08.810308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.098 [2024-12-06 14:41:08.814036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.098 [2024-12-06 14:41:08.814231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.098 [2024-12-06 14:41:08.814251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.098 [2024-12-06 14:41:08.817781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.098 [2024-12-06 14:41:08.818012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.098 [2024-12-06 14:41:08.818042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.821589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.821735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.821757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.825359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.825484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.825505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.829157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.829253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.829273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.832859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.832945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.832965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.836614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.836719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.836740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.840359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.840478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.840498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.844201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.844376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.844396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.848378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.848643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.852765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.852881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.852902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.856630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.856710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.856730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.860299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.860374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.860394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.863987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.864069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.864089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.867772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 
00:27:02.099 [2024-12-06 14:41:08.867877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.867897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.871533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.871638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.871658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.875296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.875489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.875510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.879029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.879200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.879220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.882828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.882941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.882961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.886534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.886625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.886645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.890274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.890349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.890369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.894025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.894116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.894137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.897759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.897874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.897895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.901485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.901590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.901611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.905258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.905447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.905467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.908976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.909131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.909152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.912720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.912840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.912862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.916505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.099 [2024-12-06 14:41:08.916588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.099 [2024-12-06 14:41:08.916608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.099 [2024-12-06 14:41:08.920176] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.920252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.920272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.923881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.923971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.923992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.927600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.927702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.927722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.931329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.931446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.931467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.935136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.935313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.935334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.938848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.939011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.939031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.942638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.942741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.942762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
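(Editor's note, not part of the captured log: the repeated data_crc32_calc_done errors above are the target-side NVMe/TCP data digest check failing on each WRITE; the target recomputes a CRC32C over the received data PDU, it does not match the digest carried in the PDU, and the command is completed back to the host with TRANSIENT TRANSPORT ERROR (00/22), which is the pattern every entry in this stretch of the log repeats. A minimal, self-contained sketch of that digest comparison follows; it is not SPDK's implementation, and the payload and "received" digest below are made-up illustrative values.)

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bit-by-bit CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
     * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. NVMe/TCP uses
     * CRC32C for its header (HDGST) and data (DDGST) digests. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical PDU payload and a deliberately wrong "received"
         * digest, to mimic the mismatch the log reports. */
        const uint8_t payload[] = "example PDU data";
        uint32_t computed = crc32c(payload, sizeof(payload) - 1);
        uint32_t received = 0xDEADBEEFu;  /* placeholder, not a real wire value */

        if (computed != received) {
            printf("Data digest error: computed 0x%08x, received 0x%08x\n",
                   computed, received);
        }
        return 0;
    }

(End of editor's note; the captured log continues below.)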
00:27:02.100 [2024-12-06 14:41:08.946362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.946478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.946499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.950078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.950193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.950212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.953699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.953798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.953819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.957368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.957484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.957504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.961140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.961246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.961267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.964920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.965095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.965116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.968671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.968836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.968857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.972389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.972511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.972532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.976078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.976215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.976235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.979785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.979874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.979894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.983489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.983582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.983603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.987547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.987650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.987671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.991322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.991443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.991464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.995205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.995382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.995403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:08.998951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:08.999151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:08.999171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:09.002839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:09.002960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:09.002981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.100 [2024-12-06 14:41:09.006625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.100 [2024-12-06 14:41:09.006722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.100 [2024-12-06 14:41:09.006742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.010482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.010559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.010579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.014294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.014393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.014414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.018099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.018223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.018244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.021915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.022062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.022084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.025763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.025977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.026037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.029514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.029714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.029736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.033315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.033449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.033472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.037154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.037241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.037261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.040894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.040990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.041010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.044735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.044811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.044830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.048512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.048627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 
[2024-12-06 14:41:09.048647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.052295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.052401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.052433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.056075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.056252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.056273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.059837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.060011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.060031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.101 [2024-12-06 14:41:09.064004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.101 [2024-12-06 14:41:09.064131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.101 [2024-12-06 14:41:09.064154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.068183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.068270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.068290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.072071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.072203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.072225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.076151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.076228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.076248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.079922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.080036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.080057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.083719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.083827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.083848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.087578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.087754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.087774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.091350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.091509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.091530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.095153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.095271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.095292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.098872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.098967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.098988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.102615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.102716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.102736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.106333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.106431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.106463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.110042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.110163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.110183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.113774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.113886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.113907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.117511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.117711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.117732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.121253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.121444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.121464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.124983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.125102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.125122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.128726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.128811] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.128830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.132426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.132504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.132524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.136107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.136189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.136209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.139854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.139956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.139976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.143605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.143711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.143731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.147460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.147645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.147667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.151191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.151364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.151384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.154920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.155024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.155043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.158644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.362 [2024-12-06 14:41:09.158727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.362 [2024-12-06 14:41:09.158747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.362 [2024-12-06 14:41:09.162349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.162455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.162475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.166292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.166381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.166401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.170086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.170205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.170225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.173827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.173937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.173957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.177597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.177803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.177824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.181298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 
[2024-12-06 14:41:09.181486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.181506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.185043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.185155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.185174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.188753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.188829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.188849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.192480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.192577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.192597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.196210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.196302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.196322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.199916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.200018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.200039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.203630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.203735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.203755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.207430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) 
with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.207605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.207625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.211124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.211300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.211321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.214904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.215024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.215044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.218765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.218874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.218895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.222479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.222575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.222595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.226168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.226262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.226282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.229893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.229999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.230026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.233564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.233707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.233728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.237341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.237538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.237559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.241112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.241254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.241275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.244928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.245046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.245066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.248659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.248760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.248781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.252356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.252466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.252487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.256074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.256188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.259800] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.259913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.363 [2024-12-06 14:41:09.259934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.363 [2024-12-06 14:41:09.263649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.363 [2024-12-06 14:41:09.263755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.263776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.267605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.267816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.267843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.271535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.271705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.271727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.275423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.275561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.275583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.279330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.279434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.279467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.283326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.283445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.283478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:02.364 [2024-12-06 14:41:09.287265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.287341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.287361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.291055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.291176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.291197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.294916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.295021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.295042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.298640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.298824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.298845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.302360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.302585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.302606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.306156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.306258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.306278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.309991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.310094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.310129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.313660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.313775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.313796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.317353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.317459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.317479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.321081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.321196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.321216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.364 [2024-12-06 14:41:09.324962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.364 [2024-12-06 14:41:09.325075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.364 [2024-12-06 14:41:09.325112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.329333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.329549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.329570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.333209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.333367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.333387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.337347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.337464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.337485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.341185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.341270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.341290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.345007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.345104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.345124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.348765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.348859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.348879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.352573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.352684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.352705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.356348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.356462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.356482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.360161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.360336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.360356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.363917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.364099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.364119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.367672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.367793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.367813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.371354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.371474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.371495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.375070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.375146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.375166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.378778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.378865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.378885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.382513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.382617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.382637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.386261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.386367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.386387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.390023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.390218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 
[2024-12-06 14:41:09.390239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.393793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.393969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.393992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.397617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.397764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.397786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.401625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.401765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.401788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.405443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.405527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.405547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.409211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.409286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.409306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.413086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.413186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.625 [2024-12-06 14:41:09.413206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.625 [2024-12-06 14:41:09.416838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.625 [2024-12-06 14:41:09.416944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.416964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.420749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.420929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.420950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.424574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.424762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.424782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.428383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.428501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.428522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.432191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.432277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.432297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.435923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.436020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.436040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.439676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.439751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.439771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.443493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.443603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.443623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.447367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.447487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.447508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.451274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.451463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.451483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.455063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.455239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.455259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.458830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.458933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.458954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.462582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.462689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.462709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.466416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.466521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.466541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.470182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.470269] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.470289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.473945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.474084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.474105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.477729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.477859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.477881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.481567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.481797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.481831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.485373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.485562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.485582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.489151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.489255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.489275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.492878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.492971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.492991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.496716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.496805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.496826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.500422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.500497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.500517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.504178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.504282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.504302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.507950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.508058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.508079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.511735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.511916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.511936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.515534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.515612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.626 [2024-12-06 14:41:09.515633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.626 [2024-12-06 14:41:09.519291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.626 [2024-12-06 14:41:09.519490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.519511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.523043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 
00:27:02.627 [2024-12-06 14:41:09.523180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.523201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.526692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.526775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.526795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.530363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.530479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.530499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.533990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.534103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.534124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.537729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.537864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.537885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.541424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.541524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.541545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.545252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.545427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.545459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.548948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.549112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.549132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.552641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.552790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.552810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.556344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.556433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.556454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.560013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.560100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.560119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.563733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.563809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.563828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.567506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.567632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.567652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.571226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.571354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.571374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.575031] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.575207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.575227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.578677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.578875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.578904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.582363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.582533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.582553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.586069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.586162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.586182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.627 [2024-12-06 14:41:09.590179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.627 [2024-12-06 14:41:09.590274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.627 [2024-12-06 14:41:09.590295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.886 [2024-12-06 14:41:09.594413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.886 [2024-12-06 14:41:09.594500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.886 [2024-12-06 14:41:09.594532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.886 [2024-12-06 14:41:09.598333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.886 [2024-12-06 14:41:09.598498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.886 [2024-12-06 14:41:09.598521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:02.886 [2024-12-06 14:41:09.602629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.886 [2024-12-06 14:41:09.602784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.886 [2024-12-06 14:41:09.602820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.886 [2024-12-06 14:41:09.606967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.886 [2024-12-06 14:41:09.607141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.886 [2024-12-06 14:41:09.607162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.886 [2024-12-06 14:41:09.610736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.886 [2024-12-06 14:41:09.610956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.886 [2024-12-06 14:41:09.610976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.886 [2024-12-06 14:41:09.614389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x74ba90) with pdu=0x2000190fef90 00:27:02.886 [2024-12-06 14:41:09.614487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.886 [2024-12-06 14:41:09.614507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.886 00:27:02.886 Latency(us) 00:27:02.886 [2024-12-06T14:41:09.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.886 [2024-12-06T14:41:09.856Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:02.886 nvme0n1 : 2.00 8102.48 1012.81 0.00 0.00 1970.38 1563.93 11856.06 00:27:02.886 [2024-12-06T14:41:09.856Z] =================================================================================================================== 00:27:02.886 [2024-12-06T14:41:09.856Z] Total : 8102.48 1012.81 0.00 0.00 1970.38 1563.93 11856.06 00:27:02.886 0 00:27:02.886 14:41:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:02.886 14:41:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:02.886 14:41:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:02.886 14:41:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:02.886 | .driver_specific 00:27:02.886 | .nvme_error 00:27:02.886 | .status_code 00:27:02.886 | .command_transient_transport_error' 00:27:03.145 14:41:09 -- host/digest.sh@71 -- # (( 523 > 0 )) 00:27:03.145 14:41:09 -- host/digest.sh@73 -- # killprocess 87867 00:27:03.145 14:41:09 -- common/autotest_common.sh@936 -- # '[' -z 87867 ']' 00:27:03.145 14:41:09 -- common/autotest_common.sh@940 -- # kill -0 87867 00:27:03.145 14:41:09 -- common/autotest_common.sh@941 -- # uname 00:27:03.145 14:41:09 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:03.145 14:41:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87867 00:27:03.145 14:41:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:03.145 14:41:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:03.145 killing process with pid 87867 00:27:03.145 14:41:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87867' 00:27:03.145 Received shutdown signal, test time was about 2.000000 seconds 00:27:03.145 00:27:03.145 Latency(us) 00:27:03.145 [2024-12-06T14:41:10.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.145 [2024-12-06T14:41:10.115Z] =================================================================================================================== 00:27:03.145 [2024-12-06T14:41:10.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.145 14:41:09 -- common/autotest_common.sh@955 -- # kill 87867 00:27:03.145 14:41:09 -- common/autotest_common.sh@960 -- # wait 87867 00:27:03.404 14:41:10 -- host/digest.sh@115 -- # killprocess 87552 00:27:03.404 14:41:10 -- common/autotest_common.sh@936 -- # '[' -z 87552 ']' 00:27:03.404 14:41:10 -- common/autotest_common.sh@940 -- # kill -0 87552 00:27:03.404 14:41:10 -- common/autotest_common.sh@941 -- # uname 00:27:03.404 14:41:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:03.404 14:41:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87552 00:27:03.405 14:41:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:03.405 14:41:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:03.405 killing process with pid 87552 00:27:03.405 14:41:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87552' 00:27:03.405 14:41:10 -- common/autotest_common.sh@955 -- # kill 87552 00:27:03.405 14:41:10 -- common/autotest_common.sh@960 -- # wait 87552 00:27:03.663 00:27:03.663 real 0m19.024s 00:27:03.663 user 0m35.958s 00:27:03.663 sys 0m4.891s 00:27:03.663 14:41:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.663 14:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:03.663 ************************************ 00:27:03.663 END TEST nvmf_digest_error 00:27:03.663 ************************************ 00:27:03.663 14:41:10 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:27:03.663 14:41:10 -- host/digest.sh@139 -- # nvmftestfini 00:27:03.663 14:41:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.663 14:41:10 -- nvmf/common.sh@116 -- # sync 00:27:03.923 14:41:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:03.923 14:41:10 -- nvmf/common.sh@119 -- # set +e 00:27:03.923 14:41:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.923 14:41:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:03.923 rmmod nvme_tcp 00:27:03.923 rmmod nvme_fabrics 00:27:03.923 rmmod nvme_keyring 00:27:03.923 14:41:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:03.923 14:41:10 -- nvmf/common.sh@123 -- # set -e 00:27:03.923 14:41:10 -- nvmf/common.sh@124 -- # return 0 00:27:03.923 14:41:10 -- nvmf/common.sh@477 -- # '[' -n 87552 ']' 00:27:03.923 14:41:10 -- nvmf/common.sh@478 -- # killprocess 87552 00:27:03.923 14:41:10 -- common/autotest_common.sh@936 -- # '[' -z 87552 ']' 00:27:03.923 14:41:10 -- common/autotest_common.sh@940 -- # kill -0 87552 00:27:03.923 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87552) - No such process 
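The digest pass traced above drives 16-deep random writes through bperf while tcp.c reports data digest (CRC32C) errors and each write completes with a transient transport error status; host/digest.sh then reads the error tally back over the bperf RPC socket before killing the bperf (87867) and nvmf_tgt (87552) processes. A minimal standalone sketch of that counting step, assuming the same rpc.py path, bperf socket and bdev name that appear in this run, could look like:

#!/usr/bin/env bash
# Sketch only: count NVMe transient transport errors the way host/digest.sh
# does above, by querying bdev_get_iostat over the bperf RPC socket and
# extracting the nvme_error status counters with jq.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run
sock=/var/tmp/bperf.sock                          # bperf RPC socket used above
bdev=nvme0n1

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

echo "transient transport errors on $bdev: $errcount"
if (( errcount > 0 )); then
  echo "data digest errors surfaced as transient transport errors, as this pass expects"
fi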
00:27:03.923 Process with pid 87552 is not found 00:27:03.923 14:41:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87552 is not found' 00:27:03.923 14:41:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:03.923 14:41:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:03.923 14:41:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:03.923 14:41:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.923 14:41:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:03.923 14:41:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.923 14:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.923 14:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.923 14:41:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:03.923 00:27:03.923 real 0m39.041s 00:27:03.923 user 1m13.055s 00:27:03.923 sys 0m9.977s 00:27:03.923 14:41:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.923 14:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:03.923 ************************************ 00:27:03.923 END TEST nvmf_digest 00:27:03.923 ************************************ 00:27:03.923 14:41:10 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:27:03.923 14:41:10 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:27:03.923 14:41:10 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:03.923 14:41:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:03.923 14:41:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.923 14:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:03.923 ************************************ 00:27:03.923 START TEST nvmf_mdns_discovery 00:27:03.923 ************************************ 00:27:03.923 14:41:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:04.182 * Looking for test storage... 00:27:04.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:04.182 14:41:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:04.182 14:41:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:04.182 14:41:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:04.182 14:41:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:04.182 14:41:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:04.182 14:41:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:04.182 14:41:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:04.182 14:41:10 -- scripts/common.sh@335 -- # IFS=.-: 00:27:04.182 14:41:10 -- scripts/common.sh@335 -- # read -ra ver1 00:27:04.182 14:41:10 -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.182 14:41:10 -- scripts/common.sh@336 -- # read -ra ver2 00:27:04.182 14:41:10 -- scripts/common.sh@337 -- # local 'op=<' 00:27:04.182 14:41:10 -- scripts/common.sh@339 -- # ver1_l=2 00:27:04.182 14:41:10 -- scripts/common.sh@340 -- # ver2_l=1 00:27:04.182 14:41:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:04.182 14:41:10 -- scripts/common.sh@343 -- # case "$op" in 00:27:04.182 14:41:10 -- scripts/common.sh@344 -- # : 1 00:27:04.182 14:41:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:04.182 14:41:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.182 14:41:10 -- scripts/common.sh@364 -- # decimal 1 00:27:04.182 14:41:10 -- scripts/common.sh@352 -- # local d=1 00:27:04.182 14:41:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.182 14:41:10 -- scripts/common.sh@354 -- # echo 1 00:27:04.182 14:41:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:04.182 14:41:10 -- scripts/common.sh@365 -- # decimal 2 00:27:04.182 14:41:10 -- scripts/common.sh@352 -- # local d=2 00:27:04.182 14:41:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.182 14:41:10 -- scripts/common.sh@354 -- # echo 2 00:27:04.182 14:41:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:04.182 14:41:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:04.182 14:41:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:04.182 14:41:10 -- scripts/common.sh@367 -- # return 0 00:27:04.182 14:41:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.182 14:41:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.182 --rc genhtml_branch_coverage=1 00:27:04.182 --rc genhtml_function_coverage=1 00:27:04.182 --rc genhtml_legend=1 00:27:04.182 --rc geninfo_all_blocks=1 00:27:04.182 --rc geninfo_unexecuted_blocks=1 00:27:04.182 00:27:04.182 ' 00:27:04.182 14:41:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.182 --rc genhtml_branch_coverage=1 00:27:04.182 --rc genhtml_function_coverage=1 00:27:04.182 --rc genhtml_legend=1 00:27:04.182 --rc geninfo_all_blocks=1 00:27:04.182 --rc geninfo_unexecuted_blocks=1 00:27:04.182 00:27:04.182 ' 00:27:04.182 14:41:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.182 --rc genhtml_branch_coverage=1 00:27:04.182 --rc genhtml_function_coverage=1 00:27:04.182 --rc genhtml_legend=1 00:27:04.182 --rc geninfo_all_blocks=1 00:27:04.182 --rc geninfo_unexecuted_blocks=1 00:27:04.182 00:27:04.182 ' 00:27:04.182 14:41:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.182 --rc genhtml_branch_coverage=1 00:27:04.182 --rc genhtml_function_coverage=1 00:27:04.182 --rc genhtml_legend=1 00:27:04.182 --rc geninfo_all_blocks=1 00:27:04.182 --rc geninfo_unexecuted_blocks=1 00:27:04.182 00:27:04.182 ' 00:27:04.182 14:41:10 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:04.182 14:41:10 -- nvmf/common.sh@7 -- # uname -s 00:27:04.182 14:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.182 14:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.182 14:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.182 14:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.182 14:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.182 14:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.182 14:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.182 14:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.182 14:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.182 14:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.183 14:41:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
00:27:04.183 14:41:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:27:04.183 14:41:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.183 14:41:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.183 14:41:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:04.183 14:41:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:04.183 14:41:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.183 14:41:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.183 14:41:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.183 14:41:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.183 14:41:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.183 14:41:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.183 14:41:11 -- paths/export.sh@5 -- # export PATH 00:27:04.183 14:41:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.183 14:41:11 -- nvmf/common.sh@46 -- # : 0 00:27:04.183 14:41:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:04.183 14:41:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:04.183 14:41:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:04.183 14:41:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.183 14:41:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.183 14:41:11 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:27:04.183 14:41:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:04.183 14:41:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:27:04.183 14:41:11 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:27:04.183 14:41:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:04.183 14:41:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.183 14:41:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:04.183 14:41:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:04.183 14:41:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:04.183 14:41:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.183 14:41:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.183 14:41:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.183 14:41:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:04.183 14:41:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:04.183 14:41:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:04.183 14:41:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:04.183 14:41:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:04.183 14:41:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:04.183 14:41:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.183 14:41:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.183 14:41:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:04.183 14:41:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:04.183 14:41:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:04.183 14:41:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:04.183 14:41:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:04.183 14:41:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.183 14:41:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:04.183 14:41:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:04.183 14:41:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:04.183 14:41:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:04.183 14:41:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:04.183 14:41:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:04.183 Cannot find device "nvmf_tgt_br" 00:27:04.183 14:41:11 -- nvmf/common.sh@154 -- # true 00:27:04.183 14:41:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:04.183 Cannot find device "nvmf_tgt_br2" 00:27:04.183 14:41:11 -- nvmf/common.sh@155 -- # true 00:27:04.183 14:41:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:04.183 14:41:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:04.183 Cannot find device "nvmf_tgt_br" 00:27:04.183 14:41:11 -- nvmf/common.sh@157 -- # true 00:27:04.183 
14:41:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:04.183 Cannot find device "nvmf_tgt_br2" 00:27:04.183 14:41:11 -- nvmf/common.sh@158 -- # true 00:27:04.183 14:41:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:04.183 14:41:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:04.442 14:41:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:04.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.442 14:41:11 -- nvmf/common.sh@161 -- # true 00:27:04.442 14:41:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:04.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.442 14:41:11 -- nvmf/common.sh@162 -- # true 00:27:04.442 14:41:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:04.442 14:41:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:04.442 14:41:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:04.442 14:41:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:04.442 14:41:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:04.442 14:41:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:04.442 14:41:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:04.442 14:41:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:04.442 14:41:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:04.442 14:41:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:04.442 14:41:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:04.442 14:41:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:04.442 14:41:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:04.442 14:41:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:04.442 14:41:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:04.442 14:41:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:04.442 14:41:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:04.442 14:41:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:04.442 14:41:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:04.442 14:41:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:04.442 14:41:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:04.442 14:41:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:04.442 14:41:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:04.442 14:41:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:04.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:27:04.442 00:27:04.442 --- 10.0.0.2 ping statistics --- 00:27:04.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.442 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:27:04.442 14:41:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:04.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:04.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:27:04.442 00:27:04.442 --- 10.0.0.3 ping statistics --- 00:27:04.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.442 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:04.442 14:41:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:04.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:27:04.442 00:27:04.442 --- 10.0.0.1 ping statistics --- 00:27:04.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.442 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:27:04.442 14:41:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.442 14:41:11 -- nvmf/common.sh@421 -- # return 0 00:27:04.442 14:41:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:04.442 14:41:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.442 14:41:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:04.442 14:41:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:04.442 14:41:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.443 14:41:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:04.443 14:41:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:04.701 14:41:11 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:04.701 14:41:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:04.701 14:41:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:04.701 14:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:04.701 14:41:11 -- nvmf/common.sh@469 -- # nvmfpid=88172 00:27:04.701 14:41:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:04.701 14:41:11 -- nvmf/common.sh@470 -- # waitforlisten 88172 00:27:04.701 14:41:11 -- common/autotest_common.sh@829 -- # '[' -z 88172 ']' 00:27:04.701 14:41:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.701 14:41:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.701 14:41:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.701 14:41:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.701 14:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:04.701 [2024-12-06 14:41:11.477253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:04.701 [2024-12-06 14:41:11.477328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.701 [2024-12-06 14:41:11.614287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.960 [2024-12-06 14:41:11.730121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:04.960 [2024-12-06 14:41:11.730282] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.960 [2024-12-06 14:41:11.730298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
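The ip/iptables/ping trace a few lines up is nvmf_veth_init building the virtual topology that the rest of this run depends on: an initiator veth pair plus two target veth pairs bridged on the host side, with the target ends moved into the nvmf_tgt_ns_spdk namespace. A hand-runnable sketch of the same pattern, using the interface names and addresses shown in this log (run as root), would be:

#!/usr/bin/env bash
# Sketch of the veth/netns topology that nvmf_veth_init sets up above.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One initiator pair and two target pairs; the *_br ends stay on the host.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target interface
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target interface

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge all host-side ends together so initiator and target can talk.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability checks, as in the trace above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1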
00:27:04.960 [2024-12-06 14:41:11.730309] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.960 [2024-12-06 14:41:11.730346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.526 14:41:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.526 14:41:12 -- common/autotest_common.sh@862 -- # return 0 00:27:05.526 14:41:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:05.526 14:41:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:05.526 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 14:41:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 [2024-12-06 14:41:12.661879] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 [2024-12-06 14:41:12.674041] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 null0 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 null1 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 null2 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 null3 00:27:05.785 14:41:12 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:27:05.785 14:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:05.785 14:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@47 -- # hostpid=88222 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@48 -- # waitforlisten 88222 /tmp/host.sock 00:27:05.785 14:41:12 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:05.785 14:41:12 -- common/autotest_common.sh@829 -- # '[' -z 88222 ']' 00:27:05.785 14:41:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:05.785 14:41:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:05.785 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:05.785 14:41:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:05.785 14:41:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.785 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:06.045 [2024-12-06 14:41:12.783386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:06.045 [2024-12-06 14:41:12.783497] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88222 ] 00:27:06.045 [2024-12-06 14:41:12.922196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.304 [2024-12-06 14:41:13.042349] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:06.304 [2024-12-06 14:41:13.042586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.871 14:41:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.871 14:41:13 -- common/autotest_common.sh@862 -- # return 0 00:27:06.871 14:41:13 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:27:06.871 14:41:13 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:27:06.871 14:41:13 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:27:06.871 14:41:13 -- host/mdns_discovery.sh@57 -- # avahipid=88252 00:27:06.871 14:41:13 -- host/mdns_discovery.sh@58 -- # sleep 1 00:27:06.871 14:41:13 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:27:06.871 14:41:13 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:27:06.871 Process 1069 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:27:06.871 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:27:06.871 Successfully dropped root privileges. 00:27:06.871 avahi-daemon 0.8 starting up. 00:27:06.871 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:27:06.871 Successfully called chroot(). 00:27:06.871 Successfully dropped remaining capabilities. 00:27:07.806 No service file found in /etc/avahi/services. 00:27:07.806 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
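The avahi-daemon producing these startup messages is launched inside the target namespace with its configuration fed through /dev/fd/63 (the echo -e a few lines up, restricting it to the two target interfaces and IPv4 only). Written to an ordinary file, the equivalent setup is this sketch:

#!/usr/bin/env bash
# Sketch: same avahi-daemon configuration as the inline /dev/fd/63 config above,
# written to a temp file instead.
set -euo pipefail

conf=$(mktemp)
cat > "$conf" <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF

# Run the daemon inside the target namespace so it only sees the test interfaces.
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f "$conf"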
00:27:07.806 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:27:07.806 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:07.806 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:27:07.806 Network interface enumeration completed. 00:27:07.807 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:27:07.807 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:27:07.807 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:27:07.807 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:27:07.807 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 93772280. 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:08.066 14:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:08.066 14:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # xargs 00:27:08.066 14:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # sort 00:27:08.066 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # sort 00:27:08.066 14:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # xargs 00:27:08.066 14:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:08.066 14:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.066 14:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 
00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # sort 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@68 -- # xargs 00:27:08.066 14:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.066 14:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # xargs 00:27:08.066 14:41:14 -- host/mdns_discovery.sh@64 -- # sort 00:27:08.066 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:15 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:27:08.066 14:41:15 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:08.066 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.066 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.066 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.066 14:41:15 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:27:08.066 14:41:15 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.067 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@68 -- # sort 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@68 -- # xargs 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@64 -- # sort 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@64 -- # xargs 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 [2024-12-06 14:41:15.102448] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 [2024-12-06 14:41:15.142742] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 
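At this point the host-side nvmf_tgt on /tmp/host.sock is already running the mdns discovery service, and the target-side RPC calls above and just below create the discovery listener on port 8009, the null-bdev subsystems on port 4420, and finally the avahi-publish advertisement of the CDC. Collapsed into direct rpc.py calls with the same paths, NQNs and addresses as this run (target-side calls go to the default RPC socket; the run repeats the same subsystem steps for cnode20 on 10.0.0.3), the flow is roughly this sketch:

#!/usr/bin/env bash
# Condensed sketch of the mdns_discovery setup traced in this log.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Host side: watch for _nvme-disc._tcp services over mDNS and auto-attach.
"$rpc" -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
  -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Target side: discovery listener on 8009 plus a null-bdev subsystem on 4420.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
"$rpc" bdev_null_create null0 1000 512
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# Advertise the discovery controller (CDC) over mDNS from inside the target namespace.
ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish \
  --domain=local --service CDC _nvme-disc._tcp 8009 \
  NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp &

# The host should then discover and attach; inspect what it found.
"$rpc" -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
"$rpc" -s /tmp/host.sock bdev_nvme_get_controllers
"$rpc" -s /tmp/host.sock bdev_get_bdevs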
14:41:15 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 [2024-12-06 14:41:15.182668] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:08.325 14:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.325 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:08.325 [2024-12-06 14:41:15.190697] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:08.325 14:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=88303 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@125 -- # sleep 5 00:27:08.325 14:41:15 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:27:09.260 [2024-12-06 14:41:16.002445] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:09.260 Established under name 'CDC' 00:27:09.519 [2024-12-06 14:41:16.402455] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:09.519 [2024-12-06 14:41:16.402476] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:27:09.519 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:09.519 cookie is 0 00:27:09.519 is_local: 1 00:27:09.519 our_own: 0 00:27:09.519 wide_area: 0 00:27:09.519 multicast: 1 00:27:09.519 cached: 1 00:27:09.779 [2024-12-06 14:41:16.502445] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:09.779 [2024-12-06 14:41:16.502472] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:27:09.779 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:09.779 cookie is 0 00:27:09.779 is_local: 1 00:27:09.779 our_own: 0 00:27:09.779 wide_area: 0 00:27:09.779 multicast: 1 00:27:09.779 
cached: 1 00:27:10.768 [2024-12-06 14:41:17.407255] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:10.768 [2024-12-06 14:41:17.407283] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:10.768 [2024-12-06 14:41:17.407301] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:10.768 [2024-12-06 14:41:17.493348] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:27:10.768 [2024-12-06 14:41:17.506890] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:10.768 [2024-12-06 14:41:17.506909] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:10.768 [2024-12-06 14:41:17.506928] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:10.768 [2024-12-06 14:41:17.551562] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:10.768 [2024-12-06 14:41:17.551587] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:10.768 [2024-12-06 14:41:17.594754] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:27:10.768 [2024-12-06 14:41:17.656282] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:10.768 [2024-12-06 14:41:17.656306] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:13.316 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:13.316 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@80 -- # xargs 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@80 -- # sort 00:27:13.316 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:13.316 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.316 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@76 -- # sort 00:27:13.316 14:41:20 -- host/mdns_discovery.sh@76 -- # xargs 00:27:13.316 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:13.575 14:41:20 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:13.575 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@68 -- # sort 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@68 -- # xargs 00:27:13.575 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:13.575 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@64 -- # xargs 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@64 -- # sort 00:27:13.575 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.575 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:13.575 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.575 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # xargs 00:27:13.575 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:13.575 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.575 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # xargs 00:27:13.575 14:41:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:13.575 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.834 14:41:20 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:13.834 14:41:20 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:13.834 14:41:20 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:13.834 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.834 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.834 14:41:20 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:13.834 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.834 14:41:20 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:27:13.834 14:41:20 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:27:13.834 14:41:20 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:13.835 14:41:20 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:13.835 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.835 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.835 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.835 14:41:20 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:13.835 14:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.835 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.835 14:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.835 14:41:20 -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:14.785 14:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.785 14:41:21 -- common/autotest_common.sh@10 -- # set +x 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@64 -- # xargs 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@64 -- # sort 00:27:14.785 14:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:14.785 14:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.785 14:41:21 -- common/autotest_common.sh@10 -- # set +x 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:14.785 14:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:14.785 14:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.785 14:41:21 -- common/autotest_common.sh@10 -- # set +x 00:27:14.785 [2024-12-06 14:41:21.745489] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:14.785 [2024-12-06 14:41:21.746134] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:14.785 [2024-12-06 14:41:21.746169] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:14.785 [2024-12-06 14:41:21.746202] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:14.785 [2024-12-06 14:41:21.746216] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:14.785 14:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.785 14:41:21 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:14.785 14:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.785 14:41:21 -- common/autotest_common.sh@10 -- # set +x 00:27:15.046 [2024-12-06 14:41:21.753306] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:15.046 [2024-12-06 14:41:21.754164] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:15.046 [2024-12-06 14:41:21.754227] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:15.046 14:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.046 14:41:21 -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:15.046 [2024-12-06 14:41:21.885254] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:15.046 [2024-12-06 14:41:21.885401] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:15.046 [2024-12-06 14:41:21.942465] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:15.046 [2024-12-06 14:41:21.942489] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:15.046 [2024-12-06 14:41:21.942495] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:15.046 [2024-12-06 14:41:21.942511] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:15.046 [2024-12-06 14:41:21.942597] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:15.046 [2024-12-06 14:41:21.942607] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:15.046 [2024-12-06 14:41:21.942612] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:15.046 [2024-12-06 14:41:21.942623] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:15.046 [2024-12-06 14:41:21.988339] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:15.046 [2024-12-06 14:41:21.988358] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:15.046 [2024-12-06 14:41:21.989337] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:15.046 [2024-12-06 14:41:21.989352] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@68 -- # sort 00:27:15.980 14:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@68 -- # xargs 00:27:15.980 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:27:15.980 14:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.980 14:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.980 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@64 -- # sort 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@64 -- # xargs 00:27:15.980 14:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:15.980 14:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.980 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:15.980 14:41:22 -- host/mdns_discovery.sh@72 -- # xargs 00:27:15.980 14:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.240 14:41:22 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:16.240 14:41:22 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:16.240 14:41:22 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:16.240 14:41:22 -- host/mdns_discovery.sh@72 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:27:16.240 14:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.240 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:27:16.240 14:41:22 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:16.240 14:41:22 -- host/mdns_discovery.sh@72 -- # xargs 00:27:16.240 14:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:16.240 14:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.240 14:41:23 -- common/autotest_common.sh@10 -- # set +x 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:27:16.240 14:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:16.240 14:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.240 14:41:23 -- common/autotest_common.sh@10 -- # set +x 00:27:16.240 [2024-12-06 14:41:23.066350] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:16.240 [2024-12-06 14:41:23.066553] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.240 [2024-12-06 14:41:23.066604] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:16.240 [2024-12-06 14:41:23.066618] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:16.240 [2024-12-06 14:41:23.067167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.067200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.067211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.067219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.067228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.067236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.067244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.067251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.067259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.240 
14:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:16.240 14:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.240 14:41:23 -- common/autotest_common.sh@10 -- # set +x 00:27:16.240 [2024-12-06 14:41:23.074372] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:16.240 [2024-12-06 14:41:23.074588] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:16.240 [2024-12-06 14:41:23.077126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.240 14:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.240 14:41:23 -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:16.240 [2024-12-06 14:41:23.079897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.079929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.079941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.079949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.079957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.079966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.079974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.240 [2024-12-06 14:41:23.079981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.240 [2024-12-06 14:41:23.079989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.240 [2024-12-06 14:41:23.087145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.240 [2024-12-06 14:41:23.087233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.087276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.087292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.241 [2024-12-06 14:41:23.087301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.087316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.087328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.087336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.087346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.241 [2024-12-06 14:41:23.087360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.241 [2024-12-06 14:41:23.089863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.097195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.241 [2024-12-06 14:41:23.097437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.097487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.097502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.241 [2024-12-06 14:41:23.097512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.097544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.097560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.097568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.097576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.241 [2024-12-06 14:41:23.097591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.241 [2024-12-06 14:41:23.099872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.241 [2024-12-06 14:41:23.099945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.099985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.100000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.241 [2024-12-06 14:41:23.100009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.100023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.100035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.100042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.100050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.241 [2024-12-06 14:41:23.100062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
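The repeated connect() failures with errno = 111 (ECONNREFUSED) in the blocks below are expected at this point in the test: the 10.0.0.x:4420 listeners have just been removed, so the already-attached host controllers keep retrying the stale path until the next discovery log page moves them to port 4421. A minimal sketch of the removal step as it could be issued by hand (assuming the standard SPDK scripts/rpc.py client; rpc_cmd in this log is the test harness wrapper around it):
# Drop the 4420 listeners the host controllers are still attached to; reconnect
# attempts to 10.0.0.2:4420 / 10.0.0.3:4420 then fail with ECONNREFUSED (errno 111)
# until mDNS discovery re-resolves the subsystems on port 4421.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420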
00:27:16.241 [2024-12-06 14:41:23.107381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.241 [2024-12-06 14:41:23.107634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.107816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.107868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.241 [2024-12-06 14:41:23.108092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.108114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.108128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.108136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.108144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.241 [2024-12-06 14:41:23.108157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.241 [2024-12-06 14:41:23.109919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.241 [2024-12-06 14:41:23.110000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.110044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.110059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.241 [2024-12-06 14:41:23.110085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.110099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.110111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.110118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.110126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.241 [2024-12-06 14:41:23.110138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.241 [2024-12-06 14:41:23.117582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.241 [2024-12-06 14:41:23.117771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.117815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.117840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.241 [2024-12-06 14:41:23.117850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.117866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.117879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.117887] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.117895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.241 [2024-12-06 14:41:23.117909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.241 [2024-12-06 14:41:23.119967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.241 [2024-12-06 14:41:23.120039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.120078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.120093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.241 [2024-12-06 14:41:23.120101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.120115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.120128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.120135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.120142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.241 [2024-12-06 14:41:23.120154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.241 [2024-12-06 14:41:23.127737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.241 [2024-12-06 14:41:23.127817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.127859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.127874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.241 [2024-12-06 14:41:23.127883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.241 [2024-12-06 14:41:23.127897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.241 [2024-12-06 14:41:23.127909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.241 [2024-12-06 14:41:23.127916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.241 [2024-12-06 14:41:23.127925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.241 [2024-12-06 14:41:23.127937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.241 [2024-12-06 14:41:23.130028] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.241 [2024-12-06 14:41:23.130117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.130157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.241 [2024-12-06 14:41:23.130172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.241 [2024-12-06 14:41:23.130182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.130195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.130217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.130233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.130240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.242 [2024-12-06 14:41:23.130252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.242 [2024-12-06 14:41:23.137787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.242 [2024-12-06 14:41:23.137860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.137900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.137915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.242 [2024-12-06 14:41:23.137925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.137939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.137951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.137959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.137967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.242 [2024-12-06 14:41:23.137980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.242 [2024-12-06 14:41:23.140078] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.242 [2024-12-06 14:41:23.140321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.140366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.140382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.242 [2024-12-06 14:41:23.140392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.140420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.140455] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.140465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.140473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.242 [2024-12-06 14:41:23.140487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.242 [2024-12-06 14:41:23.147834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.242 [2024-12-06 14:41:23.147908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.147947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.147962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.242 [2024-12-06 14:41:23.147971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.147985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.147997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.148004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.148012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.242 [2024-12-06 14:41:23.148024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.242 [2024-12-06 14:41:23.150284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.242 [2024-12-06 14:41:23.150531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.150577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.150594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.242 [2024-12-06 14:41:23.150604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.150636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.150651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.150659] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.150668] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.242 [2024-12-06 14:41:23.150682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.242 [2024-12-06 14:41:23.157882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.242 [2024-12-06 14:41:23.158112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.158156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.158171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.242 [2024-12-06 14:41:23.158181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.158196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.158210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.158217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.158226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.242 [2024-12-06 14:41:23.158250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.242 [2024-12-06 14:41:23.160492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.242 [2024-12-06 14:41:23.160568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.160608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.160622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.242 [2024-12-06 14:41:23.160631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.160659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.160673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.160680] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.160687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.242 [2024-12-06 14:41:23.160700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.242 [2024-12-06 14:41:23.168059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.242 [2024-12-06 14:41:23.168139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.168181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.168196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.242 [2024-12-06 14:41:23.168205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.168219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.168231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.168239] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.168246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.242 [2024-12-06 14:41:23.168258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.242 [2024-12-06 14:41:23.170538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.242 [2024-12-06 14:41:23.170620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.170660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.170675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.242 [2024-12-06 14:41:23.170684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.242 [2024-12-06 14:41:23.170697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.242 [2024-12-06 14:41:23.170709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.242 [2024-12-06 14:41:23.170717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.242 [2024-12-06 14:41:23.170724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.242 [2024-12-06 14:41:23.170736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.242 [2024-12-06 14:41:23.178127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.242 [2024-12-06 14:41:23.178198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.178236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.242 [2024-12-06 14:41:23.178250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.242 [2024-12-06 14:41:23.178259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.243 [2024-12-06 14:41:23.178273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.243 [2024-12-06 14:41:23.178294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.243 [2024-12-06 14:41:23.178304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.243 [2024-12-06 14:41:23.178311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.243 [2024-12-06 14:41:23.178323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.243 [2024-12-06 14:41:23.180583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.243 [2024-12-06 14:41:23.180650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.180689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.180703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.243 [2024-12-06 14:41:23.180712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.243 [2024-12-06 14:41:23.180725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.243 [2024-12-06 14:41:23.180737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.243 [2024-12-06 14:41:23.180745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.243 [2024-12-06 14:41:23.180753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.243 [2024-12-06 14:41:23.180764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.243 [2024-12-06 14:41:23.188174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.243 [2024-12-06 14:41:23.188397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.188460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.188476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.243 [2024-12-06 14:41:23.188486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.243 [2024-12-06 14:41:23.188502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.243 [2024-12-06 14:41:23.188515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.243 [2024-12-06 14:41:23.188523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.243 [2024-12-06 14:41:23.188531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.243 [2024-12-06 14:41:23.188545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.243 [2024-12-06 14:41:23.190627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.243 [2024-12-06 14:41:23.190699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.190738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.190753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.243 [2024-12-06 14:41:23.190762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.243 [2024-12-06 14:41:23.190776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.243 [2024-12-06 14:41:23.190788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.243 [2024-12-06 14:41:23.190795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.243 [2024-12-06 14:41:23.190803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.243 [2024-12-06 14:41:23.190815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.243 [2024-12-06 14:41:23.198361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:16.243 [2024-12-06 14:41:23.198633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.198777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.198883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914b70 with addr=10.0.0.2, port=4420 00:27:16.243 [2024-12-06 14:41:23.199102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914b70 is same with the state(5) to be set 00:27:16.243 [2024-12-06 14:41:23.199325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x914b70 (9): Bad file descriptor 00:27:16.243 [2024-12-06 14:41:23.199465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.243 [2024-12-06 14:41:23.199480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.243 [2024-12-06 14:41:23.199489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.243 [2024-12-06 14:41:23.199504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.243 [2024-12-06 14:41:23.200673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:16.243 [2024-12-06 14:41:23.200750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.200793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.243 [2024-12-06 14:41:23.200808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b0410 with addr=10.0.0.3, port=4420 00:27:16.243 [2024-12-06 14:41:23.200818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0410 is same with the state(5) to be set 00:27:16.243 [2024-12-06 14:41:23.200833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0410 (9): Bad file descriptor 00:27:16.243 [2024-12-06 14:41:23.200846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:16.243 [2024-12-06 14:41:23.200854] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:16.243 [2024-12-06 14:41:23.200862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:16.243 [2024-12-06 14:41:23.200875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.502 [2024-12-06 14:41:23.206617] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:16.502 [2024-12-06 14:41:23.206643] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:16.502 [2024-12-06 14:41:23.206662] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.502 [2024-12-06 14:41:23.206693] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:16.502 [2024-12-06 14:41:23.206707] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:16.502 [2024-12-06 14:41:23.206720] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:16.502 [2024-12-06 14:41:23.292684] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:16.502 [2024-12-06 14:41:23.292733] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:17.439 14:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@68 -- # sort 00:27:17.439 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@68 -- # xargs 00:27:17.439 14:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:17.439 14:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@64 -- # sort 00:27:17.439 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@64 -- # xargs 00:27:17.439 14:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:17.439 14:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.439 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # xargs 00:27:17.439 14:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
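The path assertions that follow read the surviving transport service IDs straight out of bdev_nvme_get_controllers on the host application's RPC socket; a sketch of the same pipeline the get_subsystem_paths helper runs (assuming the standard SPDK scripts/rpc.py client, with /tmp/host.sock being the test's host socket):
# After the 4420 listeners were removed and re-discovery completed, each
# mDNS-discovered controller should expose a single path on port 4421 only.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
  | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# expected output: 4421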
00:27:17.439 14:41:24 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:17.439 14:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.439 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@72 -- # xargs 00:27:17.439 14:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:17.439 14:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.439 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:27:17.439 14:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:17.439 14:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.439 14:41:24 -- common/autotest_common.sh@10 -- # set +x 00:27:17.439 14:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.439 14:41:24 -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:17.439 [2024-12-06 14:41:24.402479] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:18.816 14:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@80 -- # sort 00:27:18.816 14:41:25 -- common/autotest_common.sh@10 -- # set +x 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@80 -- # xargs 00:27:18.816 14:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@68 -- # sort 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:18.816 14:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.816 14:41:25 -- common/autotest_common.sh@10 -- # set +x 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@68 -- # xargs 00:27:18.816 14:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@64 -- # sort 00:27:18.816 14:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@64 -- # xargs 00:27:18.816 14:41:25 -- common/autotest_common.sh@10 -- # set +x 00:27:18.816 14:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:27:18.816 14:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.816 14:41:25 -- common/autotest_common.sh@10 -- # set +x 00:27:18.816 14:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:18.816 14:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.816 14:41:25 -- common/autotest_common.sh@10 -- # set +x 00:27:18.816 14:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.816 14:41:25 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:18.817 14:41:25 -- common/autotest_common.sh@650 -- # local es=0 00:27:18.817 14:41:25 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:18.817 14:41:25 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:18.817 14:41:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:18.817 14:41:25 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:18.817 14:41:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:18.817 14:41:25 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:18.817 14:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.817 14:41:25 -- common/autotest_common.sh@10 -- # set +x 00:27:18.817 [2024-12-06 14:41:25.619537] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:18.817 2024/12/06 14:41:25 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:18.817 request: 00:27:18.817 { 00:27:18.817 "method": "bdev_nvme_start_mdns_discovery", 00:27:18.817 "params": { 00:27:18.817 "name": "mdns", 00:27:18.817 "svcname": "_nvme-disc._http", 00:27:18.817 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:18.817 } 00:27:18.817 } 00:27:18.817 Got JSON-RPC error response 00:27:18.817 GoRPCClient: error on JSON-RPC call 00:27:18.817 14:41:25 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:18.817 14:41:25 -- 
common/autotest_common.sh@653 -- # es=1 00:27:18.817 14:41:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:18.817 14:41:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:18.817 14:41:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:18.817 14:41:25 -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:19.074 [2024-12-06 14:41:26.008124] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:19.330 [2024-12-06 14:41:26.108124] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:19.330 [2024-12-06 14:41:26.208130] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:19.330 [2024-12-06 14:41:26.208303] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:27:19.330 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:19.330 cookie is 0 00:27:19.330 is_local: 1 00:27:19.330 our_own: 0 00:27:19.330 wide_area: 0 00:27:19.331 multicast: 1 00:27:19.331 cached: 1 00:27:19.588 [2024-12-06 14:41:26.308128] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:19.588 [2024-12-06 14:41:26.308305] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:27:19.588 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:19.588 cookie is 0 00:27:19.588 is_local: 1 00:27:19.588 our_own: 0 00:27:19.588 wide_area: 0 00:27:19.588 multicast: 1 00:27:19.588 cached: 1 00:27:20.522 [2024-12-06 14:41:27.219399] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:20.523 [2024-12-06 14:41:27.219700] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:20.523 [2024-12-06 14:41:27.219762] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:20.523 [2024-12-06 14:41:27.307511] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:20.523 [2024-12-06 14:41:27.319325] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:20.523 [2024-12-06 14:41:27.319504] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:20.523 [2024-12-06 14:41:27.319560] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:20.523 [2024-12-06 14:41:27.375918] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:20.523 [2024-12-06 14:41:27.376102] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:20.523 [2024-12-06 14:41:27.406231] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:20.523 [2024-12-06 14:41:27.465097] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:20.523 [2024-12-06 14:41:27.465271] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:23.798 14:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.798 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@80 -- # xargs 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@80 -- # sort 00:27:23.798 14:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@76 -- # xargs 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@76 -- # sort 00:27:23.798 14:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.798 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:23.798 14:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:23.798 14:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.798 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@64 -- # xargs 00:27:23.798 14:41:30 -- host/mdns_discovery.sh@64 -- # sort 00:27:24.056 14:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:24.056 14:41:30 -- common/autotest_common.sh@650 -- # local es=0 00:27:24.056 14:41:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:24.056 14:41:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:24.056 14:41:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:24.056 14:41:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:24.056 14:41:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:24.056 14:41:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:24.056 14:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 [2024-12-06 14:41:30.814550] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:24.056 2024/12/06 14:41:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:24.056 request: 00:27:24.056 { 00:27:24.056 "method": "bdev_nvme_start_mdns_discovery", 00:27:24.056 "params": { 00:27:24.056 "name": "cdc", 00:27:24.056 "svcname": "_nvme-disc._tcp", 00:27:24.056 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:24.056 } 00:27:24.056 } 00:27:24.056 Got JSON-RPC error response 00:27:24.056 GoRPCClient: error on JSON-RPC call 00:27:24.056 14:41:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:24.056 14:41:30 -- common/autotest_common.sh@653 -- # es=1 00:27:24.056 14:41:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:24.056 14:41:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:24.056 14:41:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@76 -- # sort 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@76 -- # xargs 00:27:24.056 14:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 14:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.056 14:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@64 -- # xargs 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@64 -- # sort 00:27:24.056 14:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:24.056 14:41:30 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:24.056 14:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.056 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:24.056 14:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.057 14:41:30 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:27:24.057 14:41:30 -- host/mdns_discovery.sh@197 -- # kill 88222 00:27:24.057 14:41:30 -- host/mdns_discovery.sh@200 -- # wait 88222 00:27:24.315 [2024-12-06 14:41:31.104721] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:24.315 14:41:31 -- host/mdns_discovery.sh@201 -- # kill 88303 00:27:24.315 Got SIGTERM, quitting. 00:27:24.315 14:41:31 -- host/mdns_discovery.sh@202 -- # kill 88252 00:27:24.315 14:41:31 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:27:24.315 14:41:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:24.315 Got SIGTERM, quitting. 
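[editor's note] For reference, the mDNS discovery flow exercised above can be reproduced against a standalone SPDK host with the same RPCs the test script wraps. The socket path and host NQN below simply mirror the log; treat this as a minimal sketch under those assumptions, not part of the recorded run:

    # start mDNS-based discovery of NVMe-oF targets advertising _nvme-disc._tcp
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # inspect what the browser found and which bdevs were attached
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs
    # a second start reusing the same -b name (or the same service) is rejected with
    # Code=-17 "File exists", which is exactly what the NOT cases above assert
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns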
00:27:24.315 14:41:31 -- nvmf/common.sh@116 -- # sync 00:27:24.315 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:24.315 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:24.315 avahi-daemon 0.8 exiting. 00:27:24.573 14:41:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:24.573 14:41:31 -- nvmf/common.sh@119 -- # set +e 00:27:24.573 14:41:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:24.573 14:41:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:24.573 rmmod nvme_tcp 00:27:24.573 rmmod nvme_fabrics 00:27:24.573 rmmod nvme_keyring 00:27:24.573 14:41:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:24.573 14:41:31 -- nvmf/common.sh@123 -- # set -e 00:27:24.573 14:41:31 -- nvmf/common.sh@124 -- # return 0 00:27:24.573 14:41:31 -- nvmf/common.sh@477 -- # '[' -n 88172 ']' 00:27:24.573 14:41:31 -- nvmf/common.sh@478 -- # killprocess 88172 00:27:24.573 14:41:31 -- common/autotest_common.sh@936 -- # '[' -z 88172 ']' 00:27:24.573 14:41:31 -- common/autotest_common.sh@940 -- # kill -0 88172 00:27:24.573 14:41:31 -- common/autotest_common.sh@941 -- # uname 00:27:24.573 14:41:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:24.573 14:41:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88172 00:27:24.573 14:41:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:24.573 14:41:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:24.573 killing process with pid 88172 00:27:24.573 14:41:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88172' 00:27:24.573 14:41:31 -- common/autotest_common.sh@955 -- # kill 88172 00:27:24.573 14:41:31 -- common/autotest_common.sh@960 -- # wait 88172 00:27:24.831 14:41:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:24.831 14:41:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:24.831 14:41:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:24.831 14:41:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:24.831 14:41:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:24.831 14:41:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.831 14:41:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.831 14:41:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.831 14:41:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:24.831 ************************************ 00:27:24.831 END TEST nvmf_mdns_discovery 00:27:24.831 ************************************ 00:27:24.831 00:27:24.831 real 0m20.860s 00:27:24.831 user 0m40.511s 00:27:24.831 sys 0m2.046s 00:27:24.831 14:41:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:24.831 14:41:31 -- common/autotest_common.sh@10 -- # set +x 00:27:24.831 14:41:31 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:27:24.831 14:41:31 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:24.831 14:41:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:24.831 14:41:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:24.831 14:41:31 -- common/autotest_common.sh@10 -- # set +x 00:27:24.831 ************************************ 00:27:24.831 START TEST nvmf_multipath 00:27:24.831 ************************************ 00:27:24.831 14:41:31 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:25.091 * Looking for test storage... 00:27:25.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:25.091 14:41:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:25.091 14:41:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:25.091 14:41:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:25.091 14:41:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:25.091 14:41:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:25.091 14:41:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:25.091 14:41:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:25.091 14:41:31 -- scripts/common.sh@335 -- # IFS=.-: 00:27:25.091 14:41:31 -- scripts/common.sh@335 -- # read -ra ver1 00:27:25.091 14:41:31 -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.091 14:41:31 -- scripts/common.sh@336 -- # read -ra ver2 00:27:25.091 14:41:31 -- scripts/common.sh@337 -- # local 'op=<' 00:27:25.091 14:41:31 -- scripts/common.sh@339 -- # ver1_l=2 00:27:25.091 14:41:31 -- scripts/common.sh@340 -- # ver2_l=1 00:27:25.091 14:41:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:25.091 14:41:31 -- scripts/common.sh@343 -- # case "$op" in 00:27:25.091 14:41:31 -- scripts/common.sh@344 -- # : 1 00:27:25.091 14:41:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:25.091 14:41:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.091 14:41:31 -- scripts/common.sh@364 -- # decimal 1 00:27:25.091 14:41:31 -- scripts/common.sh@352 -- # local d=1 00:27:25.091 14:41:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.091 14:41:31 -- scripts/common.sh@354 -- # echo 1 00:27:25.091 14:41:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:25.091 14:41:31 -- scripts/common.sh@365 -- # decimal 2 00:27:25.091 14:41:31 -- scripts/common.sh@352 -- # local d=2 00:27:25.091 14:41:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.091 14:41:31 -- scripts/common.sh@354 -- # echo 2 00:27:25.091 14:41:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:25.091 14:41:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:25.091 14:41:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:25.091 14:41:31 -- scripts/common.sh@367 -- # return 0 00:27:25.091 14:41:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.091 14:41:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.091 --rc genhtml_branch_coverage=1 00:27:25.091 --rc genhtml_function_coverage=1 00:27:25.091 --rc genhtml_legend=1 00:27:25.091 --rc geninfo_all_blocks=1 00:27:25.091 --rc geninfo_unexecuted_blocks=1 00:27:25.091 00:27:25.091 ' 00:27:25.091 14:41:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.091 --rc genhtml_branch_coverage=1 00:27:25.091 --rc genhtml_function_coverage=1 00:27:25.091 --rc genhtml_legend=1 00:27:25.091 --rc geninfo_all_blocks=1 00:27:25.091 --rc geninfo_unexecuted_blocks=1 00:27:25.091 00:27:25.091 ' 00:27:25.091 14:41:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.091 --rc genhtml_branch_coverage=1 00:27:25.091 --rc genhtml_function_coverage=1 00:27:25.091 --rc genhtml_legend=1 00:27:25.091 --rc 
geninfo_all_blocks=1 00:27:25.091 --rc geninfo_unexecuted_blocks=1 00:27:25.091 00:27:25.091 ' 00:27:25.091 14:41:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.091 --rc genhtml_branch_coverage=1 00:27:25.091 --rc genhtml_function_coverage=1 00:27:25.091 --rc genhtml_legend=1 00:27:25.091 --rc geninfo_all_blocks=1 00:27:25.091 --rc geninfo_unexecuted_blocks=1 00:27:25.091 00:27:25.091 ' 00:27:25.091 14:41:31 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:25.091 14:41:31 -- nvmf/common.sh@7 -- # uname -s 00:27:25.091 14:41:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.091 14:41:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.091 14:41:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.091 14:41:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.091 14:41:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.091 14:41:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.091 14:41:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.091 14:41:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.091 14:41:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.091 14:41:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.091 14:41:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:27:25.091 14:41:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:27:25.091 14:41:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.091 14:41:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.091 14:41:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:25.091 14:41:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:25.091 14:41:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.091 14:41:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.091 14:41:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.091 14:41:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.091 14:41:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.091 14:41:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.091 14:41:31 -- paths/export.sh@5 -- # export PATH 00:27:25.091 14:41:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.091 14:41:31 -- nvmf/common.sh@46 -- # : 0 00:27:25.091 14:41:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:25.091 14:41:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:25.092 14:41:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:25.092 14:41:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.092 14:41:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.092 14:41:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:25.092 14:41:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:25.092 14:41:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:25.092 14:41:31 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:25.092 14:41:31 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:25.092 14:41:31 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.092 14:41:31 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:25.092 14:41:31 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:25.092 14:41:31 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:25.092 14:41:31 -- host/multipath.sh@30 -- # nvmftestinit 00:27:25.092 14:41:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:25.092 14:41:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.092 14:41:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:25.092 14:41:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:25.092 14:41:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:25.092 14:41:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.092 14:41:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.092 14:41:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.092 14:41:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:25.092 14:41:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:25.092 14:41:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:25.092 14:41:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:25.092 14:41:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:25.092 14:41:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:25.092 14:41:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.092 14:41:31 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.092 14:41:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:25.092 14:41:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:25.092 14:41:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:25.092 14:41:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:25.092 14:41:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:25.092 14:41:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.092 14:41:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:25.092 14:41:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:25.092 14:41:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:25.092 14:41:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:25.092 14:41:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:25.092 14:41:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:25.092 Cannot find device "nvmf_tgt_br" 00:27:25.092 14:41:31 -- nvmf/common.sh@154 -- # true 00:27:25.092 14:41:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:25.092 Cannot find device "nvmf_tgt_br2" 00:27:25.092 14:41:31 -- nvmf/common.sh@155 -- # true 00:27:25.092 14:41:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:25.092 14:41:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:25.092 Cannot find device "nvmf_tgt_br" 00:27:25.092 14:41:31 -- nvmf/common.sh@157 -- # true 00:27:25.092 14:41:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:25.092 Cannot find device "nvmf_tgt_br2" 00:27:25.092 14:41:32 -- nvmf/common.sh@158 -- # true 00:27:25.092 14:41:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:25.092 14:41:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:25.351 14:41:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:25.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.351 14:41:32 -- nvmf/common.sh@161 -- # true 00:27:25.351 14:41:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:25.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.351 14:41:32 -- nvmf/common.sh@162 -- # true 00:27:25.352 14:41:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:25.352 14:41:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:25.352 14:41:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:25.352 14:41:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:25.352 14:41:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:25.352 14:41:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:25.352 14:41:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:25.352 14:41:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:25.352 14:41:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:25.352 14:41:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:25.352 14:41:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:25.352 14:41:32 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:27:25.352 14:41:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:25.352 14:41:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:25.352 14:41:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:25.352 14:41:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:25.352 14:41:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:25.352 14:41:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:25.352 14:41:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:25.352 14:41:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:25.352 14:41:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:25.352 14:41:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:25.352 14:41:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:25.352 14:41:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:25.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:27:25.352 00:27:25.352 --- 10.0.0.2 ping statistics --- 00:27:25.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.352 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:25.352 14:41:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:25.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:25.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:27:25.352 00:27:25.352 --- 10.0.0.3 ping statistics --- 00:27:25.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.352 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:25.352 14:41:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:25.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:27:25.352 00:27:25.352 --- 10.0.0.1 ping statistics --- 00:27:25.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.352 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:27:25.352 14:41:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.352 14:41:32 -- nvmf/common.sh@421 -- # return 0 00:27:25.352 14:41:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:25.352 14:41:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.352 14:41:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:25.352 14:41:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:25.352 14:41:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.352 14:41:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:25.352 14:41:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:25.352 14:41:32 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:25.352 14:41:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:25.352 14:41:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:25.352 14:41:32 -- common/autotest_common.sh@10 -- # set +x 00:27:25.352 14:41:32 -- nvmf/common.sh@469 -- # nvmfpid=88824 00:27:25.352 14:41:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:25.352 14:41:32 -- nvmf/common.sh@470 -- # waitforlisten 88824 00:27:25.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.352 14:41:32 -- common/autotest_common.sh@829 -- # '[' -z 88824 ']' 00:27:25.352 14:41:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.352 14:41:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.352 14:41:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.352 14:41:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.352 14:41:32 -- common/autotest_common.sh@10 -- # set +x 00:27:25.611 [2024-12-06 14:41:32.326406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:25.611 [2024-12-06 14:41:32.327103] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.611 [2024-12-06 14:41:32.463521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:25.869 [2024-12-06 14:41:32.582556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:25.869 [2024-12-06 14:41:32.583085] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.869 [2024-12-06 14:41:32.583250] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.869 [2024-12-06 14:41:32.583441] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
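[editor's note] The virtual topology the target is started on is built by nvmf_veth_init from plain iproute2 commands. A condensed sketch of the same layout, with names and addresses taken from the log above (the second target interface, nvmf_tgt_if2/10.0.0.3, is added the same way; error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bring the veth ends up, then bridge the host-side peers together
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

After this, the ping checks above (10.0.0.1 from the namespace, 10.0.0.2/10.0.0.3 from the host) confirm the bridge is passing traffic before nvmf_tgt is launched inside the namespace.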
00:27:25.870 [2024-12-06 14:41:32.583888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.870 [2024-12-06 14:41:32.583904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.437 14:41:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.437 14:41:33 -- common/autotest_common.sh@862 -- # return 0 00:27:26.437 14:41:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:26.437 14:41:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:26.437 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:27:26.696 14:41:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.696 14:41:33 -- host/multipath.sh@33 -- # nvmfapp_pid=88824 00:27:26.696 14:41:33 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:26.955 [2024-12-06 14:41:33.685389] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.955 14:41:33 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:27.214 Malloc0 00:27:27.214 14:41:34 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:27.472 14:41:34 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.735 14:41:34 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.000 [2024-12-06 14:41:34.809144] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.000 14:41:34 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:28.263 [2024-12-06 14:41:35.057338] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:28.263 14:41:35 -- host/multipath.sh@44 -- # bdevperf_pid=88928 00:27:28.263 14:41:35 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:28.263 14:41:35 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:28.263 14:41:35 -- host/multipath.sh@47 -- # waitforlisten 88928 /var/tmp/bdevperf.sock 00:27:28.263 14:41:35 -- common/autotest_common.sh@829 -- # '[' -z 88928 ']' 00:27:28.263 14:41:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:28.263 14:41:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:28.263 14:41:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
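[editor's note] Target-side provisioning for the multipath test reduces to the handful of RPCs issued above; a standalone equivalent using the same arguments as the log (rpc.py path assumed relative to the SPDK repo) would be:

    # target side: transport, backing bdev, subsystem, two listeners on the same IP
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # initiator (bdevperf) side: attach both listeners to one controller for multipath
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

The second attach_controller with -x multipath is what makes the two listeners alternate paths to the same Nvme0n1 namespace, as shown in the set_ANA_state / confirm_io_on_port iterations that follow.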
00:27:28.263 14:41:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.263 14:41:35 -- common/autotest_common.sh@10 -- # set +x 00:27:29.203 14:41:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.203 14:41:36 -- common/autotest_common.sh@862 -- # return 0 00:27:29.203 14:41:36 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:29.461 14:41:36 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:30.027 Nvme0n1 00:27:30.027 14:41:36 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:30.286 Nvme0n1 00:27:30.286 14:41:37 -- host/multipath.sh@78 -- # sleep 1 00:27:30.286 14:41:37 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:31.221 14:41:38 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:31.221 14:41:38 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:31.480 14:41:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:31.739 14:41:38 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:31.739 14:41:38 -- host/multipath.sh@65 -- # dtrace_pid=89015 00:27:31.739 14:41:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:31.739 14:41:38 -- host/multipath.sh@66 -- # sleep 6 00:27:38.301 14:41:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:38.301 14:41:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:38.301 14:41:44 -- host/multipath.sh@67 -- # active_port=4421 00:27:38.301 14:41:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:38.301 Attaching 4 probes... 
00:27:38.301 @path[10.0.0.2, 4421]: 19161 00:27:38.301 @path[10.0.0.2, 4421]: 19989 00:27:38.301 @path[10.0.0.2, 4421]: 19469 00:27:38.301 @path[10.0.0.2, 4421]: 19255 00:27:38.301 @path[10.0.0.2, 4421]: 19880 00:27:38.301 14:41:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:38.301 14:41:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:38.301 14:41:44 -- host/multipath.sh@69 -- # sed -n 1p 00:27:38.301 14:41:44 -- host/multipath.sh@69 -- # port=4421 00:27:38.301 14:41:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:38.301 14:41:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:38.301 14:41:44 -- host/multipath.sh@72 -- # kill 89015 00:27:38.301 14:41:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:38.301 14:41:44 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:38.301 14:41:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:38.301 14:41:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:38.558 14:41:45 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:38.558 14:41:45 -- host/multipath.sh@65 -- # dtrace_pid=89152 00:27:38.558 14:41:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:38.558 14:41:45 -- host/multipath.sh@66 -- # sleep 6 00:27:45.188 14:41:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:45.189 14:41:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:45.189 14:41:51 -- host/multipath.sh@67 -- # active_port=4420 00:27:45.189 14:41:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:45.189 Attaching 4 probes... 
00:27:45.189 @path[10.0.0.2, 4420]: 19797 00:27:45.189 @path[10.0.0.2, 4420]: 20237 00:27:45.189 @path[10.0.0.2, 4420]: 20308 00:27:45.189 @path[10.0.0.2, 4420]: 20202 00:27:45.189 @path[10.0.0.2, 4420]: 19994 00:27:45.189 14:41:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:45.189 14:41:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:45.189 14:41:51 -- host/multipath.sh@69 -- # sed -n 1p 00:27:45.189 14:41:51 -- host/multipath.sh@69 -- # port=4420 00:27:45.189 14:41:51 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:45.189 14:41:51 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:45.189 14:41:51 -- host/multipath.sh@72 -- # kill 89152 00:27:45.189 14:41:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:45.189 14:41:51 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:45.189 14:41:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:45.189 14:41:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:45.447 14:41:52 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:45.447 14:41:52 -- host/multipath.sh@65 -- # dtrace_pid=89283 00:27:45.447 14:41:52 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:45.447 14:41:52 -- host/multipath.sh@66 -- # sleep 6 00:27:52.006 14:41:58 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:52.006 14:41:58 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:52.006 14:41:58 -- host/multipath.sh@67 -- # active_port=4421 00:27:52.006 14:41:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:52.006 Attaching 4 probes... 
00:27:52.006 @path[10.0.0.2, 4421]: 15140 00:27:52.006 @path[10.0.0.2, 4421]: 19360 00:27:52.006 @path[10.0.0.2, 4421]: 19515 00:27:52.006 @path[10.0.0.2, 4421]: 19413 00:27:52.006 @path[10.0.0.2, 4421]: 19762 00:27:52.006 14:41:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:52.006 14:41:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:52.006 14:41:58 -- host/multipath.sh@69 -- # sed -n 1p 00:27:52.006 14:41:58 -- host/multipath.sh@69 -- # port=4421 00:27:52.006 14:41:58 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:52.006 14:41:58 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:52.006 14:41:58 -- host/multipath.sh@72 -- # kill 89283 00:27:52.006 14:41:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:52.006 14:41:58 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:52.006 14:41:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:52.006 14:41:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:52.264 14:41:58 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:52.264 14:41:58 -- host/multipath.sh@65 -- # dtrace_pid=89412 00:27:52.264 14:41:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:52.264 14:41:58 -- host/multipath.sh@66 -- # sleep 6 00:27:58.829 14:42:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:58.829 14:42:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:58.829 14:42:05 -- host/multipath.sh@67 -- # active_port= 00:27:58.829 14:42:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:58.829 Attaching 4 probes... 
00:27:58.829 00:27:58.829 00:27:58.829 00:27:58.829 00:27:58.829 00:27:58.829 14:42:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:58.829 14:42:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:58.829 14:42:05 -- host/multipath.sh@69 -- # sed -n 1p 00:27:58.829 14:42:05 -- host/multipath.sh@69 -- # port= 00:27:58.829 14:42:05 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:58.829 14:42:05 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:58.829 14:42:05 -- host/multipath.sh@72 -- # kill 89412 00:27:58.829 14:42:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:58.829 14:42:05 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:58.829 14:42:05 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:58.829 14:42:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:59.086 14:42:05 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:59.086 14:42:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:59.086 14:42:05 -- host/multipath.sh@65 -- # dtrace_pid=89544 00:27:59.086 14:42:05 -- host/multipath.sh@66 -- # sleep 6 00:28:05.648 14:42:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:05.648 14:42:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:05.648 14:42:12 -- host/multipath.sh@67 -- # active_port=4421 00:28:05.648 14:42:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:05.648 Attaching 4 probes... 
00:28:05.648 @path[10.0.0.2, 4421]: 18899 00:28:05.648 @path[10.0.0.2, 4421]: 19155 00:28:05.648 @path[10.0.0.2, 4421]: 19141 00:28:05.648 @path[10.0.0.2, 4421]: 19148 00:28:05.648 @path[10.0.0.2, 4421]: 19112 00:28:05.648 14:42:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:05.648 14:42:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:05.648 14:42:12 -- host/multipath.sh@69 -- # sed -n 1p 00:28:05.648 14:42:12 -- host/multipath.sh@69 -- # port=4421 00:28:05.648 14:42:12 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:05.648 14:42:12 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:05.648 14:42:12 -- host/multipath.sh@72 -- # kill 89544 00:28:05.648 14:42:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:05.648 14:42:12 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:05.648 [2024-12-06 14:42:12.332948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333203] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333287] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333335] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 [2024-12-06 14:42:12.333453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1800 is same with the state(5) to be set 00:28:05.648 14:42:12 -- host/multipath.sh@101 -- # sleep 1 00:28:06.583 14:42:13 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:06.583 14:42:13 -- host/multipath.sh@65 -- # dtrace_pid=89674 00:28:06.583 14:42:13 -- host/multipath.sh@66 -- # sleep 6 00:28:06.583 14:42:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:13.164 14:42:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:13.165 14:42:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:13.165 14:42:19 -- host/multipath.sh@67 -- # active_port=4420 00:28:13.165 14:42:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.165 Attaching 4 probes... 
00:28:13.165 @path[10.0.0.2, 4420]: 20731 00:28:13.165 @path[10.0.0.2, 4420]: 21044 00:28:13.165 @path[10.0.0.2, 4420]: 20973 00:28:13.165 @path[10.0.0.2, 4420]: 20925 00:28:13.165 @path[10.0.0.2, 4420]: 20895 00:28:13.165 14:42:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:13.165 14:42:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:13.165 14:42:19 -- host/multipath.sh@69 -- # sed -n 1p 00:28:13.165 14:42:19 -- host/multipath.sh@69 -- # port=4420 00:28:13.165 14:42:19 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:13.165 14:42:19 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:13.165 14:42:19 -- host/multipath.sh@72 -- # kill 89674 00:28:13.165 14:42:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.165 14:42:19 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:13.165 [2024-12-06 14:42:19.838337] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:13.165 14:42:19 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:13.422 14:42:20 -- host/multipath.sh@111 -- # sleep 6 00:28:19.990 14:42:26 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:19.990 14:42:26 -- host/multipath.sh@65 -- # dtrace_pid=89867 00:28:19.990 14:42:26 -- host/multipath.sh@66 -- # sleep 6 00:28:19.990 14:42:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:25.353 14:42:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:25.353 14:42:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:25.611 14:42:32 -- host/multipath.sh@67 -- # active_port=4421 00:28:25.611 14:42:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:25.611 Attaching 4 probes... 
00:28:25.611 @path[10.0.0.2, 4421]: 17914 00:28:25.611 @path[10.0.0.2, 4421]: 18260 00:28:25.611 @path[10.0.0.2, 4421]: 18073 00:28:25.611 @path[10.0.0.2, 4421]: 18155 00:28:25.611 @path[10.0.0.2, 4421]: 17863 00:28:25.611 14:42:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:25.611 14:42:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:25.611 14:42:32 -- host/multipath.sh@69 -- # sed -n 1p 00:28:25.611 14:42:32 -- host/multipath.sh@69 -- # port=4421 00:28:25.611 14:42:32 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:25.611 14:42:32 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:25.611 14:42:32 -- host/multipath.sh@72 -- # kill 89867 00:28:25.611 14:42:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:25.611 14:42:32 -- host/multipath.sh@114 -- # killprocess 88928 00:28:25.611 14:42:32 -- common/autotest_common.sh@936 -- # '[' -z 88928 ']' 00:28:25.611 14:42:32 -- common/autotest_common.sh@940 -- # kill -0 88928 00:28:25.611 14:42:32 -- common/autotest_common.sh@941 -- # uname 00:28:25.611 14:42:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:25.611 14:42:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88928 00:28:25.611 killing process with pid 88928 00:28:25.611 14:42:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:25.611 14:42:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:25.611 14:42:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88928' 00:28:25.611 14:42:32 -- common/autotest_common.sh@955 -- # kill 88928 00:28:25.611 14:42:32 -- common/autotest_common.sh@960 -- # wait 88928 00:28:25.869 Connection closed with partial response: 00:28:25.869 00:28:25.869 00:28:26.133 14:42:32 -- host/multipath.sh@116 -- # wait 88928 00:28:26.133 14:42:32 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:26.133 [2024-12-06 14:41:35.144685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:26.133 [2024-12-06 14:41:35.144823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88928 ] 00:28:26.133 [2024-12-06 14:41:35.283585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.133 [2024-12-06 14:41:35.412084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.133 Running I/O for 90 seconds... 
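[editor's note] Each confirm_io_on_port iteration above flips the ANA state of the two listeners and then counts per-path I/O with the nvmf_path.bt bpftrace script; a trimmed sketch of one such iteration, reusing the target pid (88824) and paths from this run, is:

    # steer I/O: make 4420 non_optimized and 4421 optimized
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    # trace which port actually services I/O for a few seconds, as the test does
    scripts/bpftrace.sh 88824 scripts/bpf/nvmf_path.bt > trace.txt &
    sleep 6
    # the @path[10.0.0.2, <port>] counters in trace.txt should name the optimized port (4421 here);
    # when both listeners are inaccessible, no @path lines appear at all, matching the empty probe block above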
00:28:26.133 [2024-12-06 14:41:45.428508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.428565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.428646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.428685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.428721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.428756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.428821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.428898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.428918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.428931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.429329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.429984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.429998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.430063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.133 [2024-12-06 14:41:45.430112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.430188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.430235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.430303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.430345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.430496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.133 [2024-12-06 14:41:45.430805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.133 [2024-12-06 14:41:45.430865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:26.133 [2024-12-06 14:41:45.430886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.430900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.434352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.434396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:28:26.134 [2024-12-06 14:41:45.434729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.434791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.434870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.434952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.434965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.435193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.435217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.435240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.435255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.435274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.435288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.435307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.435320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.435338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.435352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.435370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.435384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.435416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.435432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.438167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.438276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.438308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.438339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.438403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.438533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.438751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.438765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.439628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.439660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.439679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.439693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.441623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.441738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.441776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441797] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.441823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.441860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.441896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.441930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.441964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.441985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.441999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.442019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.442033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.442054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.442068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.442089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.442103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.442123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.442137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 
m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.442157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.442172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.442204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.442229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.444210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.444264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.444297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.444331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.444364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:45.444397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.444479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:45.444499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:45.444514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:51.958303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:51.958377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:51.958457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:51.958495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:51.958530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:51.958588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:51.958622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:51.958657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.134 [2024-12-06 14:41:51.958690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:26.134 [2024-12-06 14:41:51.958710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.134 [2024-12-06 14:41:51.958724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.958744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.958758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.958792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.958843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.958861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.958874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.958892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.958905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.958924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.958937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.958957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.958970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.958988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.959001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.959042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.959073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:26.135 [2024-12-06 14:41:51.959104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.959136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.959167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.959198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.959229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.959248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.959261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.960699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960884] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.960956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.960978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.961258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.961301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.961340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 
sqhd:0051 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.961501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.961540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.961580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961844] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.961953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.961987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.962125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 
[2024-12-06 14:41:51.962290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.962541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.962661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.962701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.962750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.962893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.962931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.962971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.962995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.963010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.963034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.963048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.963073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.963087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.963111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.963125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.963150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.963165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.963189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.963203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.963227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.135 [2024-12-06 14:41:51.963248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.135 [2024-12-06 14:41:51.963275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.135 [2024-12-06 14:41:51.963289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:51.963314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:51.963328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:51.963352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:51.963367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:51.963391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:51.963406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.971792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.971878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.971932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.971953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.971981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.971996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 
m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.972030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.972098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.972132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.972405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.972520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.972967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.972981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.973083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.973151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.973533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.973577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:26.136 [2024-12-06 14:41:58.973732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.973943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.973958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.974134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.974170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.974250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.974321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.974358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.974500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.974741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.974966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.974980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.975018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.975056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
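Every completion notice in this stretch carries the status pair printed in parentheses: per the NVMe base specification, (03/02) is Status Code Type 3h (Path Related Status) with Status Code 02h, Asymmetric Access Inaccessible, which is the condition this ANA test is deliberately provoking, while the (00/08) completions further down are the generic ABORTED - SQ DELETION status raised when the queue pair is torn down. A minimal decoding sketch is shown below; the regex, table, and helper name are this sketch's own and not part of SPDK.

```python
import re

# Illustrative decode table for the (SCT/SC) pairs seen in this log
# (values per the NVMe base specification; names are this sketch's own).
STATUS_NAMES = {
    (0x0, 0x08): "ABORTED - SQ DELETION",
    (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",
}

# Matches the "(SCT/SC)" pair that the completion notices print,
# e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 ...".
STATUS_RE = re.compile(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)", re.I)

def decode_status(notice: str) -> str:
    """Return a human-readable name for the status pair in one notice line."""
    m = STATUS_RE.search(notice)
    if not m:
        return "no status pair found"
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    return STATUS_NAMES.get((sct, sc), f"unknown status (sct={sct:#x}, sc={sc:#x})")

if __name__ == "__main__":
    sample = "*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0"
    print(decode_status(sample))  # -> ASYMMETRIC ACCESS INACCESSIBLE
```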
00:28:26.136 [2024-12-06 14:41:58.975080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.975170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.975207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.975326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.975902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.975942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.975966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.975981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.976005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.136 [2024-12-06 14:41:58.976020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.136 [2024-12-06 14:41:58.976044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.136 [2024-12-06 14:41:58.976059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
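The same command/completion notice pair repeats for thousands of queued I/Os while the path is inaccessible, which makes the raw console output hard to eyeball. A small, assumed post-processing sketch like the one below (not part of the test itself) can condense such a stretch into per-opcode and per-status counts; the "console.log" file name and the helper are hypothetical.

```python
import re
from collections import Counter

# Pulls the opcode from command notices and the textual status from
# completion notices as they appear in this console log.
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)")
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \([0-9a-f]{2}/[0-9a-f]{2}\)", re.I
)

def summarize(log_text: str) -> Counter:
    """Count command opcodes and completion statuses in a console-log excerpt."""
    counts = Counter()
    counts.update(f"cmd:{op}" for op in CMD_RE.findall(log_text))
    counts.update(f"cpl:{status}" for status in CPL_RE.findall(log_text))
    return counts

if __name__ == "__main__":
    # "console.log" is a hypothetical local copy of this build's output.
    with open("console.log", encoding="utf-8", errors="replace") as f:
        for key, n in summarize(f.read()).most_common():
            print(f"{n:8d}  {key}")
```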
00:28:26.137 [2024-12-06 14:41:58.976306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 
nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.976904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.976968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.976982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.977005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.977019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.977043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.977057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.977081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:41:58.977095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.977120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.977144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.977168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.977187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.977211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.977226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:41:58.977250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:41:58.977264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:89 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.334765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.334987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.334998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85656 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.137 [2024-12-06 14:42:12.335307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.137 [2024-12-06 14:42:12.335851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.137 [2024-12-06 14:42:12.335880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.137 [2024-12-06 14:42:12.335892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.335906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.335918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.335931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.335944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.335957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.335969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.335983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.335995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.336704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 
[2024-12-06 14:42:12.336743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.336977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.336990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.337002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.337051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.337102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.337218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.337293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.138 [2024-12-06 14:42:12.337375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.138 [2024-12-06 14:42:12.337862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.337876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235f5b0 is same with the state(5) to be set 00:28:26.138 
[2024-12-06 14:42:12.337892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:26.138 [2024-12-06 14:42:12.337902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:26.138 [2024-12-06 14:42:12.337912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85608 len:8 PRP1 0x0 PRP2 0x0 00:28:26.138 [2024-12-06 14:42:12.337931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.338034] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x235f5b0 was disconnected and freed. reset controller. 00:28:26.138 [2024-12-06 14:42:12.338153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.138 [2024-12-06 14:42:12.338177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.338191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.138 [2024-12-06 14:42:12.338203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.338222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.138 [2024-12-06 14:42:12.338234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.338247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.138 [2024-12-06 14:42:12.338259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.138 [2024-12-06 14:42:12.338270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2503790 is same with the state(5) to be set 00:28:26.138 [2024-12-06 14:42:12.339467] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.138 [2024-12-06 14:42:12.339509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2503790 (9): Bad file descriptor 00:28:26.138 [2024-12-06 14:42:12.339611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.138 [2024-12-06 14:42:12.339664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.138 [2024-12-06 14:42:12.339685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2503790 with addr=10.0.0.2, port=4421 00:28:26.138 [2024-12-06 14:42:12.339699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2503790 is same with the state(5) to be set 00:28:26.138 [2024-12-06 14:42:12.339722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2503790 (9): Bad file descriptor 00:28:26.138 [2024-12-06 14:42:12.339752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.138 [2024-12-06 14:42:12.339765] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.138 [2024-12-06 14:42:12.339779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.138 [2024-12-06 14:42:12.339817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.138 [2024-12-06 14:42:12.339830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.138 [2024-12-06 14:42:22.402876] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:26.138 Received shutdown signal, test time was about 55.251699 seconds 00:28:26.138 00:28:26.138 Latency(us) 00:28:26.138 [2024-12-06T14:42:33.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.138 [2024-12-06T14:42:33.108Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:26.138 Verification LBA range: start 0x0 length 0x4000 00:28:26.138 Nvme0n1 : 55.25 11269.01 44.02 0.00 0.00 11341.34 923.46 7015926.69 00:28:26.138 [2024-12-06T14:42:33.108Z] =================================================================================================================== 00:28:26.138 [2024-12-06T14:42:33.108Z] Total : 11269.01 44.02 0.00 0.00 11341.34 923.46 7015926.69 00:28:26.138 14:42:32 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.395 14:42:33 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:26.395 14:42:33 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:26.395 14:42:33 -- host/multipath.sh@125 -- # nvmftestfini 00:28:26.395 14:42:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:26.395 14:42:33 -- nvmf/common.sh@116 -- # sync 00:28:26.395 14:42:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:26.395 14:42:33 -- nvmf/common.sh@119 -- # set +e 00:28:26.395 14:42:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:26.395 14:42:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:26.395 rmmod nvme_tcp 00:28:26.395 rmmod nvme_fabrics 00:28:26.395 rmmod nvme_keyring 00:28:26.395 14:42:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:26.395 14:42:33 -- nvmf/common.sh@123 -- # set -e 00:28:26.395 14:42:33 -- nvmf/common.sh@124 -- # return 0 00:28:26.395 14:42:33 -- nvmf/common.sh@477 -- # '[' -n 88824 ']' 00:28:26.395 14:42:33 -- nvmf/common.sh@478 -- # killprocess 88824 00:28:26.395 14:42:33 -- common/autotest_common.sh@936 -- # '[' -z 88824 ']' 00:28:26.395 14:42:33 -- common/autotest_common.sh@940 -- # kill -0 88824 00:28:26.395 14:42:33 -- common/autotest_common.sh@941 -- # uname 00:28:26.395 14:42:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:26.395 14:42:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88824 00:28:26.395 killing process with pid 88824 00:28:26.395 14:42:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:26.395 14:42:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:26.395 14:42:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88824' 00:28:26.395 14:42:33 -- common/autotest_common.sh@955 -- # kill 88824 00:28:26.395 14:42:33 -- common/autotest_common.sh@960 -- # wait 88824 00:28:26.959 14:42:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:26.959 14:42:33 -- nvmf/common.sh@483 
-- # [[ tcp == \t\c\p ]] 00:28:26.959 14:42:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:26.959 14:42:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.959 14:42:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:26.959 14:42:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.959 14:42:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.959 14:42:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.959 14:42:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:26.959 00:28:26.959 real 1m2.026s 00:28:26.959 user 2m53.326s 00:28:26.959 sys 0m14.941s 00:28:26.959 14:42:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:26.959 ************************************ 00:28:26.959 END TEST nvmf_multipath 00:28:26.959 ************************************ 00:28:26.959 14:42:33 -- common/autotest_common.sh@10 -- # set +x 00:28:26.959 14:42:33 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:26.959 14:42:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:26.959 14:42:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:26.959 14:42:33 -- common/autotest_common.sh@10 -- # set +x 00:28:26.959 ************************************ 00:28:26.959 START TEST nvmf_timeout 00:28:26.959 ************************************ 00:28:26.959 14:42:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:26.959 * Looking for test storage... 00:28:26.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:26.959 14:42:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:26.959 14:42:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:26.959 14:42:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:27.216 14:42:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:27.216 14:42:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:27.216 14:42:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:27.216 14:42:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:27.216 14:42:34 -- scripts/common.sh@335 -- # IFS=.-: 00:28:27.216 14:42:34 -- scripts/common.sh@335 -- # read -ra ver1 00:28:27.216 14:42:34 -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.216 14:42:34 -- scripts/common.sh@336 -- # read -ra ver2 00:28:27.216 14:42:34 -- scripts/common.sh@337 -- # local 'op=<' 00:28:27.216 14:42:34 -- scripts/common.sh@339 -- # ver1_l=2 00:28:27.216 14:42:34 -- scripts/common.sh@340 -- # ver2_l=1 00:28:27.216 14:42:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:27.216 14:42:34 -- scripts/common.sh@343 -- # case "$op" in 00:28:27.216 14:42:34 -- scripts/common.sh@344 -- # : 1 00:28:27.216 14:42:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:27.216 14:42:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.216 14:42:34 -- scripts/common.sh@364 -- # decimal 1 00:28:27.216 14:42:34 -- scripts/common.sh@352 -- # local d=1 00:28:27.216 14:42:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.216 14:42:34 -- scripts/common.sh@354 -- # echo 1 00:28:27.216 14:42:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:27.216 14:42:34 -- scripts/common.sh@365 -- # decimal 2 00:28:27.216 14:42:34 -- scripts/common.sh@352 -- # local d=2 00:28:27.216 14:42:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.216 14:42:34 -- scripts/common.sh@354 -- # echo 2 00:28:27.216 14:42:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:27.216 14:42:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:27.216 14:42:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:27.216 14:42:34 -- scripts/common.sh@367 -- # return 0 00:28:27.216 14:42:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.216 14:42:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:27.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.216 --rc genhtml_branch_coverage=1 00:28:27.216 --rc genhtml_function_coverage=1 00:28:27.216 --rc genhtml_legend=1 00:28:27.216 --rc geninfo_all_blocks=1 00:28:27.216 --rc geninfo_unexecuted_blocks=1 00:28:27.216 00:28:27.216 ' 00:28:27.216 14:42:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:27.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.216 --rc genhtml_branch_coverage=1 00:28:27.216 --rc genhtml_function_coverage=1 00:28:27.216 --rc genhtml_legend=1 00:28:27.217 --rc geninfo_all_blocks=1 00:28:27.217 --rc geninfo_unexecuted_blocks=1 00:28:27.217 00:28:27.217 ' 00:28:27.217 14:42:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:27.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.217 --rc genhtml_branch_coverage=1 00:28:27.217 --rc genhtml_function_coverage=1 00:28:27.217 --rc genhtml_legend=1 00:28:27.217 --rc geninfo_all_blocks=1 00:28:27.217 --rc geninfo_unexecuted_blocks=1 00:28:27.217 00:28:27.217 ' 00:28:27.217 14:42:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:27.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.217 --rc genhtml_branch_coverage=1 00:28:27.217 --rc genhtml_function_coverage=1 00:28:27.217 --rc genhtml_legend=1 00:28:27.217 --rc geninfo_all_blocks=1 00:28:27.217 --rc geninfo_unexecuted_blocks=1 00:28:27.217 00:28:27.217 ' 00:28:27.217 14:42:34 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:27.217 14:42:34 -- nvmf/common.sh@7 -- # uname -s 00:28:27.217 14:42:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.217 14:42:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.217 14:42:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.217 14:42:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.217 14:42:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.217 14:42:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.217 14:42:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.217 14:42:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.217 14:42:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.217 14:42:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.217 14:42:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:28:27.217 
14:42:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:28:27.217 14:42:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.217 14:42:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.217 14:42:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:27.217 14:42:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.217 14:42:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.217 14:42:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.217 14:42:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.217 14:42:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.217 14:42:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.217 14:42:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.217 14:42:34 -- paths/export.sh@5 -- # export PATH 00:28:27.217 14:42:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.217 14:42:34 -- nvmf/common.sh@46 -- # : 0 00:28:27.217 14:42:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:27.217 14:42:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:27.217 14:42:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:27.217 14:42:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.217 14:42:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.217 14:42:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:28:27.217 14:42:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:27.217 14:42:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:27.217 14:42:34 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:27.217 14:42:34 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:27.217 14:42:34 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:27.217 14:42:34 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:27.217 14:42:34 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:27.217 14:42:34 -- host/timeout.sh@19 -- # nvmftestinit 00:28:27.217 14:42:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:27.217 14:42:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.217 14:42:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:27.217 14:42:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:27.217 14:42:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:27.217 14:42:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.217 14:42:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.217 14:42:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.217 14:42:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:27.217 14:42:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:27.217 14:42:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:27.217 14:42:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:27.217 14:42:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:27.217 14:42:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:27.217 14:42:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.217 14:42:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.217 14:42:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:27.217 14:42:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:27.217 14:42:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:27.217 14:42:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:27.217 14:42:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:27.217 14:42:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.217 14:42:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:27.217 14:42:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:27.217 14:42:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:27.217 14:42:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:27.217 14:42:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:27.217 14:42:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:27.217 Cannot find device "nvmf_tgt_br" 00:28:27.217 14:42:34 -- nvmf/common.sh@154 -- # true 00:28:27.217 14:42:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:27.217 Cannot find device "nvmf_tgt_br2" 00:28:27.217 14:42:34 -- nvmf/common.sh@155 -- # true 00:28:27.217 14:42:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:27.217 14:42:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:27.217 Cannot find device "nvmf_tgt_br" 00:28:27.217 14:42:34 -- nvmf/common.sh@157 -- # true 00:28:27.217 14:42:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:27.217 Cannot find device "nvmf_tgt_br2" 00:28:27.217 14:42:34 -- nvmf/common.sh@158 -- # true 00:28:27.217 14:42:34 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:27.217 14:42:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:27.475 14:42:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:27.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.475 14:42:34 -- nvmf/common.sh@161 -- # true 00:28:27.475 14:42:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:27.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.475 14:42:34 -- nvmf/common.sh@162 -- # true 00:28:27.475 14:42:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:27.475 14:42:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:27.475 14:42:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:27.475 14:42:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:27.475 14:42:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:27.475 14:42:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:27.475 14:42:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:27.475 14:42:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:27.475 14:42:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:27.475 14:42:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:27.475 14:42:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:27.475 14:42:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:27.475 14:42:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:27.475 14:42:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:27.475 14:42:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:27.475 14:42:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:27.475 14:42:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:27.475 14:42:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:27.475 14:42:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:27.475 14:42:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:27.475 14:42:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:27.475 14:42:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:27.475 14:42:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:27.475 14:42:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:27.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:28:27.475 00:28:27.475 --- 10.0.0.2 ping statistics --- 00:28:27.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.475 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:28:27.475 14:42:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:27.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:27.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:28:27.475 00:28:27.475 --- 10.0.0.3 ping statistics --- 00:28:27.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.475 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:28:27.475 14:42:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:27.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:28:27.475 00:28:27.475 --- 10.0.0.1 ping statistics --- 00:28:27.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.475 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:28:27.475 14:42:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.475 14:42:34 -- nvmf/common.sh@421 -- # return 0 00:28:27.475 14:42:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:27.475 14:42:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.475 14:42:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:27.475 14:42:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:27.475 14:42:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.475 14:42:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:27.475 14:42:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:27.475 14:42:34 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:27.475 14:42:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:27.475 14:42:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:27.475 14:42:34 -- common/autotest_common.sh@10 -- # set +x 00:28:27.475 14:42:34 -- nvmf/common.sh@469 -- # nvmfpid=90203 00:28:27.475 14:42:34 -- nvmf/common.sh@470 -- # waitforlisten 90203 00:28:27.475 14:42:34 -- common/autotest_common.sh@829 -- # '[' -z 90203 ']' 00:28:27.475 14:42:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:27.475 14:42:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.475 14:42:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.475 14:42:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.475 14:42:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.475 14:42:34 -- common/autotest_common.sh@10 -- # set +x 00:28:27.734 [2024-12-06 14:42:34.482706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:27.734 [2024-12-06 14:42:34.482771] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.734 [2024-12-06 14:42:34.618857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:27.992 [2024-12-06 14:42:34.742273] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:27.992 [2024-12-06 14:42:34.742481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.992 [2024-12-06 14:42:34.742500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:27.992 [2024-12-06 14:42:34.742511] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.992 [2024-12-06 14:42:34.742613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.992 [2024-12-06 14:42:34.743180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.559 14:42:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:28.559 14:42:35 -- common/autotest_common.sh@862 -- # return 0 00:28:28.559 14:42:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:28.559 14:42:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:28.559 14:42:35 -- common/autotest_common.sh@10 -- # set +x 00:28:28.559 14:42:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.559 14:42:35 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:28.559 14:42:35 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:29.124 [2024-12-06 14:42:35.793778] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.124 14:42:35 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:29.382 Malloc0 00:28:29.382 14:42:36 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:29.639 14:42:36 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.896 14:42:36 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.154 [2024-12-06 14:42:36.887005] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.154 14:42:36 -- host/timeout.sh@32 -- # bdevperf_pid=90294 00:28:30.154 14:42:36 -- host/timeout.sh@34 -- # waitforlisten 90294 /var/tmp/bdevperf.sock 00:28:30.154 14:42:36 -- common/autotest_common.sh@829 -- # '[' -z 90294 ']' 00:28:30.154 14:42:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:30.154 14:42:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:30.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:30.154 14:42:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:30.154 14:42:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:30.154 14:42:36 -- common/autotest_common.sh@10 -- # set +x 00:28:30.154 14:42:36 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:30.154 [2024-12-06 14:42:36.968858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:30.154 [2024-12-06 14:42:36.968953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90294 ] 00:28:30.154 [2024-12-06 14:42:37.108945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.413 [2024-12-06 14:42:37.229808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.979 14:42:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.979 14:42:37 -- common/autotest_common.sh@862 -- # return 0 00:28:30.979 14:42:37 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:31.238 14:42:38 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:31.805 NVMe0n1 00:28:31.805 14:42:38 -- host/timeout.sh@51 -- # rpc_pid=90343 00:28:31.805 14:42:38 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:31.805 14:42:38 -- host/timeout.sh@53 -- # sleep 1 00:28:31.805 Running I/O for 10 seconds... 00:28:32.742 14:42:39 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.003 [2024-12-06 14:42:39.754785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.754994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.755002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 
14:42:39.755012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.755020] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.755028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.755037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.003 [2024-12-06 14:42:39.755045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755069] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755130] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755181] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to 
be set 00:28:33.004 [2024-12-06 14:42:39.755218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755234] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efa40 is same with the state(5) to be set 00:28:33.004 [2024-12-06 14:42:39.755786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.755917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.755943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.755969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.755979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.755988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.755998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.004 [2024-12-06 14:42:39.756173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.004 [2024-12-06 14:42:39.756230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.004 [2024-12-06 14:42:39.756250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.004 [2024-12-06 14:42:39.756288] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.004 [2024-12-06 14:42:39.756325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.004 [2024-12-06 14:42:39.756343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.004 [2024-12-06 14:42:39.756361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.004 [2024-12-06 14:42:39.756371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.004 [2024-12-06 14:42:39.756379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 
[2024-12-06 14:42:39.756889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.756957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.756986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.756995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.757005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.757023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.757032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.757042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.005 [2024-12-06 14:42:39.757051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.757061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.005 [2024-12-06 14:42:39.757070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.005 [2024-12-06 14:42:39.757080] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.006 [2024-12-06 14:42:39.757108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.006 [2024-12-06 14:42:39.757405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757528] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.006 [2024-12-06 14:42:39.757620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.006 [2024-12-06 14:42:39.757657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.006 [2024-12-06 14:42:39.757753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.006 [2024-12-06 14:42:39.757838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.006 [2024-12-06 14:42:39.757859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.006 [2024-12-06 14:42:39.757880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.006 [2024-12-06 14:42:39.757891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.757900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.757917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.757926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.757943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.757958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.757970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.757979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.757991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114600 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:33.007 [2024-12-06 14:42:39.758276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.007 [2024-12-06 14:42:39.758474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 
14:42:39.758494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.007 [2024-12-06 14:42:39.758639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4d050 is same with the state(5) to be set 00:28:33.007 [2024-12-06 14:42:39.758676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.007 [2024-12-06 14:42:39.758691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.007 [2024-12-06 14:42:39.758699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114720 len:8 PRP1 0x0 PRP2 0x0 00:28:33.007 [2024-12-06 14:42:39.758708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.007 [2024-12-06 14:42:39.758785] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b4d050 was disconnected and freed. reset controller. 
00:28:33.007 [2024-12-06 14:42:39.759011] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.008 [2024-12-06 14:42:39.759089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad7dc0 (9): Bad file descriptor 00:28:33.008 [2024-12-06 14:42:39.759200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-12-06 14:42:39.759244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-12-06 14:42:39.759259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad7dc0 with addr=10.0.0.2, port=4420 00:28:33.008 [2024-12-06 14:42:39.759276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad7dc0 is same with the state(5) to be set 00:28:33.008 [2024-12-06 14:42:39.759293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad7dc0 (9): Bad file descriptor 00:28:33.008 [2024-12-06 14:42:39.759308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.008 [2024-12-06 14:42:39.759317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.008 [2024-12-06 14:42:39.759327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.008 [2024-12-06 14:42:39.759346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.008 [2024-12-06 14:42:39.759356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.008 14:42:39 -- host/timeout.sh@56 -- # sleep 2 00:28:34.912 [2024-12-06 14:42:41.759473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.912 [2024-12-06 14:42:41.759565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.912 [2024-12-06 14:42:41.759582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad7dc0 with addr=10.0.0.2, port=4420 00:28:34.912 [2024-12-06 14:42:41.759595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad7dc0 is same with the state(5) to be set 00:28:34.912 [2024-12-06 14:42:41.759619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad7dc0 (9): Bad file descriptor 00:28:34.912 [2024-12-06 14:42:41.759638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.912 [2024-12-06 14:42:41.759648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.912 [2024-12-06 14:42:41.759658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.912 [2024-12-06 14:42:41.759683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.912 [2024-12-06 14:42:41.759695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.912 14:42:41 -- host/timeout.sh@57 -- # get_controller 00:28:34.912 14:42:41 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:34.912 14:42:41 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:35.170 14:42:42 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:35.170 14:42:42 -- host/timeout.sh@58 -- # get_bdev 00:28:35.170 14:42:42 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:35.170 14:42:42 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:35.429 14:42:42 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:35.429 14:42:42 -- host/timeout.sh@61 -- # sleep 5 00:28:36.804 [2024-12-06 14:42:43.759781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.804 [2024-12-06 14:42:43.759862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.804 [2024-12-06 14:42:43.759889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad7dc0 with addr=10.0.0.2, port=4420 00:28:36.804 [2024-12-06 14:42:43.759899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad7dc0 is same with the state(5) to be set 00:28:36.804 [2024-12-06 14:42:43.759933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad7dc0 (9): Bad file descriptor 00:28:36.804 [2024-12-06 14:42:43.759961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:36.804 [2024-12-06 14:42:43.759971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:36.804 [2024-12-06 14:42:43.759980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:36.804 [2024-12-06 14:42:43.759998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:36.804 [2024-12-06 14:42:43.760007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.335 [2024-12-06 14:42:45.760055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.335 [2024-12-06 14:42:45.760103] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.335 [2024-12-06 14:42:45.760113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.335 [2024-12-06 14:42:45.760121] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:39.335 [2024-12-06 14:42:45.760139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
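[editor's note, not part of the captured log] The get_controller/get_bdev checks traced above reduce to two JSON-RPC queries against bdevperf's RPC socket. A minimal sketch, assuming the same rpc.py client and /var/tmp/bdevperf.sock socket shown in the trace (illustrative only):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Name of the attached NVMe-oF controller (expected to be NVMe0 in this run).
  controller=$("$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')

  # Name of the namespace bdev exposed by that controller (expected to be NVMe0n1).
  bdev=$("$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name')

  [[ "$controller" == "NVMe0" && "$bdev" == "NVMe0n1" ]] && echo "controller and bdev still registered"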
00:28:39.902 00:28:39.902 Latency(us) 00:28:39.902 [2024-12-06T14:42:46.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.902 [2024-12-06T14:42:46.872Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:39.902 Verification LBA range: start 0x0 length 0x4000 00:28:39.902 NVMe0n1 : 8.19 1745.30 6.82 15.63 0.00 72584.18 2815.07 7015926.69 00:28:39.902 [2024-12-06T14:42:46.872Z] =================================================================================================================== 00:28:39.902 [2024-12-06T14:42:46.873Z] Total : 1745.30 6.82 15.63 0.00 72584.18 2815.07 7015926.69 00:28:39.903 0 00:28:40.470 14:42:47 -- host/timeout.sh@62 -- # get_controller 00:28:40.470 14:42:47 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:40.470 14:42:47 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:40.728 14:42:47 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:40.728 14:42:47 -- host/timeout.sh@63 -- # get_bdev 00:28:40.728 14:42:47 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:40.728 14:42:47 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:40.987 14:42:47 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:40.987 14:42:47 -- host/timeout.sh@65 -- # wait 90343 00:28:40.987 14:42:47 -- host/timeout.sh@67 -- # killprocess 90294 00:28:40.987 14:42:47 -- common/autotest_common.sh@936 -- # '[' -z 90294 ']' 00:28:40.987 14:42:47 -- common/autotest_common.sh@940 -- # kill -0 90294 00:28:40.987 14:42:47 -- common/autotest_common.sh@941 -- # uname 00:28:40.987 14:42:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:40.987 14:42:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90294 00:28:40.987 14:42:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:40.987 killing process with pid 90294 00:28:40.987 Received shutdown signal, test time was about 9.311554 seconds 00:28:40.987 00:28:40.987 Latency(us) 00:28:40.987 [2024-12-06T14:42:47.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.987 [2024-12-06T14:42:47.957Z] =================================================================================================================== 00:28:40.987 [2024-12-06T14:42:47.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.987 14:42:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:40.987 14:42:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90294' 00:28:40.987 14:42:47 -- common/autotest_common.sh@955 -- # kill 90294 00:28:40.987 14:42:47 -- common/autotest_common.sh@960 -- # wait 90294 00:28:41.554 14:42:48 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.819 [2024-12-06 14:42:48.525039] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
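[editor's note, not part of the captured log] A quick sanity check of the bdevperf summary above (illustrative arithmetic, not from the log): 1745.30 IOPS at a 4096-byte I/O size works out to the reported 6.82 MiB/s, and with queue depth 128 Little's law gives an average latency of roughly 128 / 1745.30 ≈ 73 ms, broadly in line with the ~72.6 ms Average column.

  echo "1745.30 * 4096 / 1048576" | bc -l    # ≈ 6.82 MiB/s, matching the MiB/s column
  echo "128 / 1745.30 * 1000000" | bc -l     # ≈ 73,340 us average latency (approximate)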
00:28:41.819 14:42:48 -- host/timeout.sh@74 -- # bdevperf_pid=90502 00:28:41.819 14:42:48 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:41.819 14:42:48 -- host/timeout.sh@76 -- # waitforlisten 90502 /var/tmp/bdevperf.sock 00:28:41.819 14:42:48 -- common/autotest_common.sh@829 -- # '[' -z 90502 ']' 00:28:41.819 14:42:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:41.819 14:42:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.819 14:42:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:41.819 14:42:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.819 14:42:48 -- common/autotest_common.sh@10 -- # set +x 00:28:41.819 [2024-12-06 14:42:48.587498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:41.819 [2024-12-06 14:42:48.587583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90502 ] 00:28:41.819 [2024-12-06 14:42:48.721018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.101 [2024-12-06 14:42:48.843169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.675 14:42:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:42.675 14:42:49 -- common/autotest_common.sh@862 -- # return 0 00:28:42.675 14:42:49 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:42.934 14:42:49 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:43.193 NVMe0n1 00:28:43.193 14:42:50 -- host/timeout.sh@84 -- # rpc_pid=90544 00:28:43.193 14:42:50 -- host/timeout.sh@86 -- # sleep 1 00:28:43.193 14:42:50 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:43.451 Running I/O for 10 seconds... 
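[editor's note, not part of the captured log] The bdevperf setup traced above can be reproduced by hand roughly as follows; a minimal sketch reusing the binaries, socket path, and flags that appear in the trace (target 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1), not a verbatim excerpt of host/timeout.sh:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z) on core 2 (-m 0x4) so it waits for RPC configuration.
  # The test waits for $SOCK to appear (waitforlisten) before issuing RPCs.
  "$BDEVPERF" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &

  # Apply the option used by the test (-r -1), then attach the NVMe/TCP controller
  # with the reconnect/loss-timeout knobs this test exercises.
  "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Kick off the verify workload (the "Running I/O for 10 seconds..." phase above).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests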
00:28:44.385 14:42:51 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.648 [2024-12-06 14:42:51.357849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358558] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.648 [2024-12-06 14:42:51.358626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.358720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dcb70 is same with the state(5) to be set 00:28:44.649 [2024-12-06 14:42:51.359264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120064 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:44.649 [2024-12-06 14:42:51.359546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 
14:42:51.359741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.649 [2024-12-06 14:42:51.359759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.649 [2024-12-06 14:42:51.359769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.359792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.359802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.359826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.359852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.359862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.359873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.359882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.359892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.359900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.359911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.360739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.360780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.360868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.360939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.360957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.360976] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.360985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.360994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.361012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.361031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.361051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.361069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.361088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.361107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.361125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.361143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.650 [2024-12-06 14:42:51.361162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.650 [2024-12-06 14:42:51.361171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.650 [2024-12-06 14:42:51.361180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.651 [2024-12-06 14:42:51.361198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.651 [2024-12-06 14:42:51.361216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 
[2024-12-06 14:42:51.361571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.651 [2024-12-06 14:42:51.361597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.651 [2024-12-06 14:42:51.361615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.651 [2024-12-06 14:42:51.361634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.361831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.361840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.362155] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.651 [2024-12-06 14:42:51.362166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.651 [2024-12-06 14:42:51.362177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362572] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.652 [2024-12-06 14:42:51.362707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.652 [2024-12-06 14:42:51.362749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.652 [2024-12-06 14:42:51.362759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.362768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.653 [2024-12-06 14:42:51.362786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.362805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.653 [2024-12-06 14:42:51.362823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.653 [2024-12-06 14:42:51.362840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.653 [2024-12-06 14:42:51.362859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.362885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.653 [2024-12-06 14:42:51.362903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.362929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.362947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120168 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.362965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.362983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.362992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.363001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.363010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.363019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.363029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.653 [2024-12-06 14:42:51.363037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.363046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd1050 is same with the state(5) to be set 00:28:44.653 [2024-12-06 14:42:51.363064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.653 [2024-12-06 14:42:51.363071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.653 [2024-12-06 14:42:51.363079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120256 len:8 PRP1 0x0 PRP2 0x0 00:28:44.653 [2024-12-06 14:42:51.363087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.653 [2024-12-06 14:42:51.363159] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bd1050 was disconnected and freed. reset controller. 
00:28:44.653 [2024-12-06 14:42:51.363398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.653 [2024-12-06 14:42:51.363526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor 00:28:44.653 [2024-12-06 14:42:51.363637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.653 [2024-12-06 14:42:51.363683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.653 [2024-12-06 14:42:51.363699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5bdc0 with addr=10.0.0.2, port=4420 00:28:44.653 [2024-12-06 14:42:51.363709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5bdc0 is same with the state(5) to be set 00:28:44.653 [2024-12-06 14:42:51.363727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor 00:28:44.653 [2024-12-06 14:42:51.363742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.653 [2024-12-06 14:42:51.363752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.653 [2024-12-06 14:42:51.363762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.653 [2024-12-06 14:42:51.363797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.653 [2024-12-06 14:42:51.363808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.653 14:42:51 -- host/timeout.sh@90 -- # sleep 1 00:28:45.586 [2024-12-06 14:42:52.363900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.587 [2024-12-06 14:42:52.363981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.587 [2024-12-06 14:42:52.363999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5bdc0 with addr=10.0.0.2, port=4420 00:28:45.587 [2024-12-06 14:42:52.364009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5bdc0 is same with the state(5) to be set 00:28:45.587 [2024-12-06 14:42:52.364027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor 00:28:45.587 [2024-12-06 14:42:52.364043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.587 [2024-12-06 14:42:52.364051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.587 [2024-12-06 14:42:52.364060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.587 [2024-12-06 14:42:52.364078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.587 [2024-12-06 14:42:52.364088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.587 14:42:52 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.844 [2024-12-06 14:42:52.627760] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.844 14:42:52 -- host/timeout.sh@92 -- # wait 90544 00:28:46.779 [2024-12-06 14:42:53.383109] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:53.339 00:28:53.339 Latency(us) 00:28:53.339 [2024-12-06T14:43:00.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.339 [2024-12-06T14:43:00.309Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:53.339 Verification LBA range: start 0x0 length 0x4000 00:28:53.339 NVMe0n1 : 10.01 9684.97 37.83 0.00 0.00 13192.31 1765.00 3019898.88 00:28:53.339 [2024-12-06T14:43:00.309Z] =================================================================================================================== 00:28:53.339 [2024-12-06T14:43:00.309Z] Total : 9684.97 37.83 0.00 0.00 13192.31 1765.00 3019898.88 00:28:53.339 0 00:28:53.339 14:43:00 -- host/timeout.sh@97 -- # rpc_pid=90665 00:28:53.339 14:43:00 -- host/timeout.sh@98 -- # sleep 1 00:28:53.339 14:43:00 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:53.598 Running I/O for 10 seconds... 00:28:54.533 14:43:01 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.793 [2024-12-06 14:43:01.507687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507897] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507938] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.793 [2024-12-06 14:43:01.507971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.507980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.507988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.507996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508021] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508070] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 
00:28:54.794 [2024-12-06 14:43:01.508278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508399] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839c70 is same with the state(5) to be set 00:28:54.794 [2024-12-06 14:43:01.508995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.794 [2024-12-06 14:43:01.509223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.794 [2024-12-06 14:43:01.509233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:54.795 [2024-12-06 14:43:01.509502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.509984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.509996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 
14:43:01.510026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.795 [2024-12-06 14:43:01.510308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.795 [2024-12-06 14:43:01.510318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510453] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.796 [2024-12-06 14:43:01.510815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.796 [2024-12-06 14:43:01.510980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.796 [2024-12-06 14:43:01.510991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.510999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 
[2024-12-06 14:43:01.511018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511203] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.511299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.511391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.511401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.512395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.512743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.513150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.513463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.513884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.514319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.514623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.515234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.515337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.515358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.515379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.515398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.515454] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.515474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.797 [2024-12-06 14:43:01.515492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.797 [2024-12-06 14:43:01.515512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.797 [2024-12-06 14:43:01.515522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.798 [2024-12-06 14:43:01.515530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.798 [2024-12-06 14:43:01.515550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.798 [2024-12-06 14:43:01.515588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.798 [2024-12-06 14:43:01.515607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.798 [2024-12-06 14:43:01.515737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bccf90 is same with the state(5) to be set 00:28:54.798 [2024-12-06 14:43:01.515765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:54.798 [2024-12-06 14:43:01.515772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:54.798 [2024-12-06 14:43:01.515780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114104 len:8 PRP1 0x0 PRP2 0x0 00:28:54.798 [2024-12-06 14:43:01.515788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.798 [2024-12-06 14:43:01.515868] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bccf90 was disconnected and freed. reset controller. 
00:28:54.798 [2024-12-06 14:43:01.515968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:54.798 [2024-12-06 14:43:01.515983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.798 [2024-12-06 14:43:01.515994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:54.798 [2024-12-06 14:43:01.516003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.798 [2024-12-06 14:43:01.516257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:54.798 [2024-12-06 14:43:01.516276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.798 [2024-12-06 14:43:01.516287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:54.798 [2024-12-06 14:43:01.516295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.798 [2024-12-06 14:43:01.516305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5bdc0 is same with the state(5) to be set
00:28:54.798 [2024-12-06 14:43:01.516672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:54.798 [2024-12-06 14:43:01.516706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor
00:28:54.798 [2024-12-06 14:43:01.517001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.798 [2024-12-06 14:43:01.517062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.798 [2024-12-06 14:43:01.517079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5bdc0 with addr=10.0.0.2, port=4420
00:28:54.798 [2024-12-06 14:43:01.517089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5bdc0 is same with the state(5) to be set
00:28:54.798 [2024-12-06 14:43:01.517303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor
00:28:54.798 [2024-12-06 14:43:01.517329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:54.798 [2024-12-06 14:43:01.517339] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:54.798 [2024-12-06 14:43:01.517351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:54.798 [2024-12-06 14:43:01.517371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
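The connect() failures with errno = 111 (ECONNREFUSED) above are the expected symptom at this point in the run: the target's TCP listener has been taken down, so every reconnect attempt from the initiator is refused and each reset cycle ends in "Resetting controller failed." How often bdev_nvme retries and how long it keeps the controller around before giving up are controlled by the reconnect options passed when the controller is attached. A minimal sketch of such an attach call follows; the socket path and option values mirror the second bdevperf instance started later in this log and are illustrative only, not necessarily what the first instance used:

    # Hedged sketch: attach an NVMe-oF/TCP controller with explicit reconnect behaviour.
    # Values are taken from the attach call visible later in this log; adjust as needed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 \
        --reconnect-delay-sec 2    # retry roughly every 2 s, give up after ~5 s of controller loss
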
00:28:54.798 [2024-12-06 14:43:01.517382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:54.798 14:43:01 -- host/timeout.sh@101 -- # sleep 3
00:28:55.739 [2024-12-06 14:43:02.517492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.739 [2024-12-06 14:43:02.517583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.739 [2024-12-06 14:43:02.517599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5bdc0 with addr=10.0.0.2, port=4420
00:28:55.739 [2024-12-06 14:43:02.517611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5bdc0 is same with the state(5) to be set
00:28:55.739 [2024-12-06 14:43:02.517633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor
00:28:55.739 [2024-12-06 14:43:02.517649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:55.739 [2024-12-06 14:43:02.517665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:55.739 [2024-12-06 14:43:02.517693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:55.739 [2024-12-06 14:43:02.517718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:55.739 [2024-12-06 14:43:02.517730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.673 [2024-12-06 14:43:03.517827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.673 [2024-12-06 14:43:03.517915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.673 [2024-12-06 14:43:03.517934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5bdc0 with addr=10.0.0.2, port=4420
00:28:56.673 [2024-12-06 14:43:03.517947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5bdc0 is same with the state(5) to be set
00:28:56.673 [2024-12-06 14:43:03.517969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor
00:28:56.673 [2024-12-06 14:43:03.517988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.673 [2024-12-06 14:43:03.518013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.673 [2024-12-06 14:43:03.518038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.673 [2024-12-06 14:43:03.518077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.673 [2024-12-06 14:43:03.518087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.608 [2024-12-06 14:43:04.519831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.608 [2024-12-06 14:43:04.519952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.608 [2024-12-06 14:43:04.519972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5bdc0 with addr=10.0.0.2, port=4420
00:28:57.608 [2024-12-06 14:43:04.519987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5bdc0 is same with the state(5) to be set
00:28:57.608 [2024-12-06 14:43:04.520148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bdc0 (9): Bad file descriptor
00:28:57.608 [2024-12-06 14:43:04.520623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.608 [2024-12-06 14:43:04.520649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.608 [2024-12-06 14:43:04.520661] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.608 [2024-12-06 14:43:04.523322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.608 [2024-12-06 14:43:04.523366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.608 14:43:04 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:57.866 [2024-12-06 14:43:04.781235] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:57.866 14:43:04 -- host/timeout.sh@103 -- # wait 90665
00:28:58.802 [2024-12-06 14:43:05.550004] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
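Taken together, the block above is the recovery half of the timeout test: reconnect attempts keep failing while the listener is gone, host/timeout.sh@102 re-adds the TCP listener on the target, and the very next reset attempt completes with "Resetting controller successful." A condensed sketch of that target-side toggle, using the same rpc.py calls that appear in the trace (paths, NQN, address and port exactly as in this log; the surrounding timing is simplified):

    # Hedged sketch of the listener toggle performed by host/timeout.sh in this run.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the TCP listener: host-side reconnects now fail with errno 111.
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # corresponds to the host/timeout.sh@101 sleep seen earlier
    # Restore the listener: the next controller reset/reconnect succeeds.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
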
00:29:04.068
00:29:04.068 Latency(us)
00:29:04.068 [2024-12-06T14:43:11.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:04.068 [2024-12-06T14:43:11.038Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:04.068 Verification LBA range: start 0x0 length 0x4000
00:29:04.068 NVMe0n1 : 10.01 7260.53 28.36 6333.31 0.00 9396.75 997.93 3019898.88
00:29:04.068 [2024-12-06T14:43:11.038Z] ===================================================================================================================
00:29:04.068 [2024-12-06T14:43:11.038Z] Total : 7260.53 28.36 6333.31 0.00 9396.75 0.00 3019898.88
00:29:04.068 0
00:29:04.068 14:43:10 -- host/timeout.sh@105 -- # killprocess 90502
00:29:04.068 14:43:10 -- common/autotest_common.sh@936 -- # '[' -z 90502 ']'
00:29:04.068 14:43:10 -- common/autotest_common.sh@940 -- # kill -0 90502
00:29:04.068 14:43:10 -- common/autotest_common.sh@941 -- # uname
00:29:04.068 14:43:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:04.068 14:43:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90502
00:29:04.068 killing process with pid 90502
Received shutdown signal, test time was about 10.000000 seconds
00:29:04.068
00:29:04.068 Latency(us)
00:29:04.068 [2024-12-06T14:43:11.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:04.068 [2024-12-06T14:43:11.038Z] ===================================================================================================================
00:29:04.068 [2024-12-06T14:43:11.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:04.068 14:43:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:29:04.068 14:43:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:29:04.068 14:43:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90502'
00:29:04.068 14:43:10 -- common/autotest_common.sh@955 -- # kill 90502
00:29:04.068 14:43:10 -- common/autotest_common.sh@960 -- # wait 90502
00:29:04.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:04.068 14:43:10 -- host/timeout.sh@110 -- # bdevperf_pid=90793
00:29:04.068 14:43:10 -- host/timeout.sh@112 -- # waitforlisten 90793 /var/tmp/bdevperf.sock
00:29:04.068 14:43:10 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:29:04.068 14:43:10 -- common/autotest_common.sh@829 -- # '[' -z 90793 ']'
00:29:04.068 14:43:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:04.068 14:43:10 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:04.068 14:43:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:04.068 14:43:10 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:04.068 14:43:10 -- common/autotest_common.sh@10 -- # set +x
00:29:04.068 [2024-12-06 14:43:10.961028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
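The new bdevperf instance is launched with -z, which keeps it idle until it is configured over the -r /var/tmp/bdevperf.sock RPC socket; the harness then waits for that socket (waitforlisten) and drives the rest over RPC, as the log lines that follow show. A minimal sketch of the same launch-and-drive pattern, with waitforlisten replaced by a simple socket poll (that substitution is mine; everything else mirrors the commands in this log):

    # Hedged sketch: start bdevperf idle on a private RPC socket, then configure and run it via RPC.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
    while [ ! -S $SOCK ]; do sleep 0.1; done   # stand-in for the harness's waitforlisten
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests   # starts the queued 10 s randread job
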
00:29:04.068 [2024-12-06 14:43:10.961141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90793 ] 00:29:04.355 [2024-12-06 14:43:11.099664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.355 [2024-12-06 14:43:11.218061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.287 14:43:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.287 14:43:11 -- common/autotest_common.sh@862 -- # return 0 00:29:05.287 14:43:11 -- host/timeout.sh@116 -- # dtrace_pid=90820 00:29:05.287 14:43:11 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90793 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:05.287 14:43:11 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:05.287 14:43:12 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:05.544 NVMe0n1 00:29:05.544 14:43:12 -- host/timeout.sh@124 -- # rpc_pid=90869 00:29:05.544 14:43:12 -- host/timeout.sh@125 -- # sleep 1 00:29:05.544 14:43:12 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:05.801 Running I/O for 10 seconds... 00:29:06.734 14:43:13 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.997 [2024-12-06 14:43:13.746842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746911] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.746994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747121] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747158] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.997 [2024-12-06 14:43:13.747247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 
00:29:06.998 [2024-12-06 14:43:13.747317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is 
same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.747708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d400 is same with the state(5) to be set 00:29:06.998 [2024-12-06 14:43:13.748122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-12-06 14:43:13.748160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-12-06 14:43:13.748184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-12-06 14:43:13.748195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-12-06 14:43:13.748206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-12-06 14:43:13.748215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-12-06 14:43:13.748235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-12-06 14:43:13.748243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-12-06 14:43:13.748253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-12-06 14:43:13.748261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-12-06 14:43:13.748291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-12-06 14:43:13.748553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-12-06 14:43:13.748578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-12-06 14:43:13.748589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:06.999 [2024-12-06 14:43:13.748638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.748987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.748997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.749980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.749996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.750861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.750990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.751008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.751018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.751286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.751510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-12-06 14:43:13.751530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-12-06 14:43:13.751543] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.751552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.751562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.751571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.751582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.751590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.751600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.751610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.751620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.751744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.751757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.751891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.752787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.752796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.753977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.753986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 
[2024-12-06 14:43:13.754499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.000 [2024-12-06 14:43:13.754851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.000 [2024-12-06 14:43:13.754861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.754869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.755956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.755964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.756104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.756393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.756555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.756631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.756644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.756668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.756680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.756688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.756698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.756705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.756949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.756969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.756981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.757006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.757017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.757262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.757299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.757309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.757319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.757329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.757787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.757816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.757828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.758187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.758201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.758209] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.758220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.758243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.758385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.758401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.758544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.758556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.758703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.758812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.758841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.758849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.759088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.759111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.759123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.759133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.759144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.759152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.759441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.759454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.001 [2024-12-06 14:43:13.759464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.001 [2024-12-06 14:43:13.759480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.759983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.759991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.760001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:07.002 [2024-12-06 14:43:13.760393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.760446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.760467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.760485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.760636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.760746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.760771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.002 [2024-12-06 14:43:13.760781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.761040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaf050 is same with the state(5) to be set 00:29:07.002 [2024-12-06 14:43:13.761056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:07.002 [2024-12-06 14:43:13.761064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:07.002 [2024-12-06 14:43:13.761072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71592 len:8 PRP1 0x0 PRP2 0x0 00:29:07.002 [2024-12-06 14:43:13.761081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.761553] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1aaf050 was disconnected and freed. reset controller. 
00:29:07.002 [2024-12-06 14:43:13.761867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.002 [2024-12-06 14:43:13.761911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.761923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.002 [2024-12-06 14:43:13.761933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.761944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.002 [2024-12-06 14:43:13.761953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.761963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.002 [2024-12-06 14:43:13.761972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.002 [2024-12-06 14:43:13.761981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a39dc0 is same with the state(5) to be set 00:29:07.002 [2024-12-06 14:43:13.762549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.002 [2024-12-06 14:43:13.762582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a39dc0 (9): Bad file descriptor 00:29:07.002 [2024-12-06 14:43:13.762864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-12-06 14:43:13.762922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-12-06 14:43:13.762939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a39dc0 with addr=10.0.0.2, port=4420 00:29:07.003 [2024-12-06 14:43:13.762949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a39dc0 is same with the state(5) to be set 00:29:07.003 [2024-12-06 14:43:13.763158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a39dc0 (9): Bad file descriptor 00:29:07.003 [2024-12-06 14:43:13.763187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.003 [2024-12-06 14:43:13.763197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.003 [2024-12-06 14:43:13.763208] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.003 [2024-12-06 14:43:13.763228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.003 [2024-12-06 14:43:13.763239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.003 14:43:13 -- host/timeout.sh@128 -- # wait 90869 00:29:08.905 [2024-12-06 14:43:15.763447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.905 [2024-12-06 14:43:15.763581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.905 [2024-12-06 14:43:15.763604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a39dc0 with addr=10.0.0.2, port=4420 00:29:08.905 [2024-12-06 14:43:15.763619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a39dc0 is same with the state(5) to be set 00:29:08.905 [2024-12-06 14:43:15.763649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a39dc0 (9): Bad file descriptor 00:29:08.905 [2024-12-06 14:43:15.763686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.905 [2024-12-06 14:43:15.763701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.905 [2024-12-06 14:43:15.763712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.905 [2024-12-06 14:43:15.763742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.905 [2024-12-06 14:43:15.763757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.806 [2024-12-06 14:43:17.763937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.806 [2024-12-06 14:43:17.764069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.806 [2024-12-06 14:43:17.764091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a39dc0 with addr=10.0.0.2, port=4420 00:29:10.806 [2024-12-06 14:43:17.764106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a39dc0 is same with the state(5) to be set 00:29:10.806 [2024-12-06 14:43:17.764134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a39dc0 (9): Bad file descriptor 00:29:10.806 [2024-12-06 14:43:17.764158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.806 [2024-12-06 14:43:17.764168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.806 [2024-12-06 14:43:17.764180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.806 [2024-12-06 14:43:17.764210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.806 [2024-12-06 14:43:17.764224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.797 [2024-12-06 14:43:19.764336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:12.797 [2024-12-06 14:43:19.764459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.797 [2024-12-06 14:43:19.764476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.797 [2024-12-06 14:43:19.764487] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:12.797 [2024-12-06 14:43:19.764523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.175 00:29:14.175 Latency(us) 00:29:14.175 [2024-12-06T14:43:21.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.175 [2024-12-06T14:43:21.145Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:14.175 NVMe0n1 : 8.15 2856.18 11.16 15.71 0.00 44608.54 2636.33 7046430.72 00:29:14.175 [2024-12-06T14:43:21.145Z] =================================================================================================================== 00:29:14.175 [2024-12-06T14:43:21.145Z] Total : 2856.18 11.16 15.71 0.00 44608.54 2636.33 7046430.72 00:29:14.175 0 00:29:14.175 14:43:20 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:14.175 Attaching 5 probes... 00:29:14.175 1318.408838: reset bdev controller NVMe0 00:29:14.175 1318.487317: reconnect bdev controller NVMe0 00:29:14.175 3319.145766: reconnect delay bdev controller NVMe0 00:29:14.175 3319.180725: reconnect bdev controller NVMe0 00:29:14.175 5319.657971: reconnect delay bdev controller NVMe0 00:29:14.175 5319.689550: reconnect bdev controller NVMe0 00:29:14.175 7320.159092: reconnect delay bdev controller NVMe0 00:29:14.175 7320.187984: reconnect bdev controller NVMe0 00:29:14.175 14:43:20 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:14.175 14:43:20 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:14.175 14:43:20 -- host/timeout.sh@136 -- # kill 90820 00:29:14.175 14:43:20 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:14.175 14:43:20 -- host/timeout.sh@139 -- # killprocess 90793 00:29:14.175 14:43:20 -- common/autotest_common.sh@936 -- # '[' -z 90793 ']' 00:29:14.175 14:43:20 -- common/autotest_common.sh@940 -- # kill -0 90793 00:29:14.175 14:43:20 -- common/autotest_common.sh@941 -- # uname 00:29:14.175 14:43:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:14.175 14:43:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90793 00:29:14.175 14:43:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:14.175 killing process with pid 90793 00:29:14.175 14:43:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:14.175 14:43:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90793' 00:29:14.175 14:43:20 -- common/autotest_common.sh@955 -- # kill 90793 00:29:14.176 14:43:20 -- common/autotest_common.sh@960 -- # wait 90793 00:29:14.176 Received shutdown signal, test time was about 8.213208 seconds 00:29:14.176 00:29:14.176 Latency(us) 00:29:14.176 [2024-12-06T14:43:21.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.176 [2024-12-06T14:43:21.146Z] =================================================================================================================== 00:29:14.176 [2024-12-06T14:43:21.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.434 14:43:21 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:14.694 14:43:21 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:14.694 14:43:21 -- host/timeout.sh@145 -- # nvmftestfini 00:29:14.694 14:43:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:14.694 14:43:21 -- nvmf/common.sh@116 -- # sync 00:29:14.694 14:43:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:14.694 14:43:21 -- nvmf/common.sh@119 -- # set +e 00:29:14.694 14:43:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:14.694 14:43:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:14.694 rmmod nvme_tcp 00:29:14.694 rmmod nvme_fabrics 00:29:14.694 rmmod nvme_keyring 00:29:14.694 14:43:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:14.694 14:43:21 -- nvmf/common.sh@123 -- # set -e 00:29:14.694 14:43:21 -- nvmf/common.sh@124 -- # return 0 00:29:14.694 14:43:21 -- nvmf/common.sh@477 -- # '[' -n 90203 ']' 00:29:14.694 14:43:21 -- nvmf/common.sh@478 -- # killprocess 90203 00:29:14.694 14:43:21 -- common/autotest_common.sh@936 -- # '[' -z 90203 ']' 00:29:14.694 14:43:21 -- common/autotest_common.sh@940 -- # kill -0 90203 00:29:14.694 14:43:21 -- common/autotest_common.sh@941 -- # uname 00:29:14.694 14:43:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:14.694 14:43:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90203 00:29:14.694 14:43:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:14.694 killing process with pid 90203 00:29:14.694 14:43:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:14.694 14:43:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90203' 00:29:14.694 14:43:21 -- common/autotest_common.sh@955 -- # kill 90203 00:29:14.694 14:43:21 -- common/autotest_common.sh@960 -- # wait 90203 00:29:15.263 14:43:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:15.263 14:43:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:15.263 14:43:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:15.263 14:43:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:15.263 14:43:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:15.263 14:43:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.263 14:43:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.263 14:43:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.263 14:43:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:29:15.263 00:29:15.263 real 0m48.173s 00:29:15.263 user 2m20.278s 00:29:15.263 sys 0m5.764s 00:29:15.263 14:43:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:15.263 ************************************ 00:29:15.263 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:15.263 END TEST nvmf_timeout 00:29:15.263 ************************************ 00:29:15.263 14:43:22 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:29:15.263 14:43:22 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:15.263 14:43:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.263 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:15.263 14:43:22 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:15.263 00:29:15.263 real 18m57.242s 00:29:15.263 user 60m40.010s 00:29:15.263 sys 3m53.999s 00:29:15.263 14:43:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:15.263 14:43:22 -- common/autotest_common.sh@10 -- # set +x 
00:29:15.263 ************************************ 00:29:15.263 END TEST nvmf_tcp 00:29:15.263 ************************************ 00:29:15.263 14:43:22 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:29:15.263 14:43:22 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:15.263 14:43:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:15.263 14:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:15.263 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:15.263 ************************************ 00:29:15.263 START TEST spdkcli_nvmf_tcp 00:29:15.263 ************************************ 00:29:15.263 14:43:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:15.263 * Looking for test storage... 00:29:15.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:15.263 14:43:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:15.263 14:43:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:15.263 14:43:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:15.522 14:43:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:15.522 14:43:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:15.522 14:43:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:15.522 14:43:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:15.522 14:43:22 -- scripts/common.sh@335 -- # IFS=.-: 00:29:15.522 14:43:22 -- scripts/common.sh@335 -- # read -ra ver1 00:29:15.522 14:43:22 -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.522 14:43:22 -- scripts/common.sh@336 -- # read -ra ver2 00:29:15.522 14:43:22 -- scripts/common.sh@337 -- # local 'op=<' 00:29:15.522 14:43:22 -- scripts/common.sh@339 -- # ver1_l=2 00:29:15.522 14:43:22 -- scripts/common.sh@340 -- # ver2_l=1 00:29:15.522 14:43:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:15.522 14:43:22 -- scripts/common.sh@343 -- # case "$op" in 00:29:15.522 14:43:22 -- scripts/common.sh@344 -- # : 1 00:29:15.522 14:43:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:15.522 14:43:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.522 14:43:22 -- scripts/common.sh@364 -- # decimal 1 00:29:15.522 14:43:22 -- scripts/common.sh@352 -- # local d=1 00:29:15.522 14:43:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.522 14:43:22 -- scripts/common.sh@354 -- # echo 1 00:29:15.522 14:43:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:15.522 14:43:22 -- scripts/common.sh@365 -- # decimal 2 00:29:15.522 14:43:22 -- scripts/common.sh@352 -- # local d=2 00:29:15.523 14:43:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.523 14:43:22 -- scripts/common.sh@354 -- # echo 2 00:29:15.523 14:43:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:15.523 14:43:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:15.523 14:43:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:15.523 14:43:22 -- scripts/common.sh@367 -- # return 0 00:29:15.523 14:43:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.523 14:43:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:15.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.523 --rc genhtml_branch_coverage=1 00:29:15.523 --rc genhtml_function_coverage=1 00:29:15.523 --rc genhtml_legend=1 00:29:15.523 --rc geninfo_all_blocks=1 00:29:15.523 --rc geninfo_unexecuted_blocks=1 00:29:15.523 00:29:15.523 ' 00:29:15.523 14:43:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:15.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.523 --rc genhtml_branch_coverage=1 00:29:15.523 --rc genhtml_function_coverage=1 00:29:15.523 --rc genhtml_legend=1 00:29:15.523 --rc geninfo_all_blocks=1 00:29:15.523 --rc geninfo_unexecuted_blocks=1 00:29:15.523 00:29:15.523 ' 00:29:15.523 14:43:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:15.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.523 --rc genhtml_branch_coverage=1 00:29:15.523 --rc genhtml_function_coverage=1 00:29:15.523 --rc genhtml_legend=1 00:29:15.523 --rc geninfo_all_blocks=1 00:29:15.523 --rc geninfo_unexecuted_blocks=1 00:29:15.523 00:29:15.523 ' 00:29:15.523 14:43:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:15.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.523 --rc genhtml_branch_coverage=1 00:29:15.523 --rc genhtml_function_coverage=1 00:29:15.523 --rc genhtml_legend=1 00:29:15.523 --rc geninfo_all_blocks=1 00:29:15.523 --rc geninfo_unexecuted_blocks=1 00:29:15.523 00:29:15.523 ' 00:29:15.523 14:43:22 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:15.523 14:43:22 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:15.523 14:43:22 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:15.523 14:43:22 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:15.523 14:43:22 -- nvmf/common.sh@7 -- # uname -s 00:29:15.523 14:43:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.523 14:43:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.523 14:43:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.523 14:43:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.523 14:43:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.523 14:43:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.523 14:43:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:15.523 14:43:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.523 14:43:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.523 14:43:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.523 14:43:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:29:15.523 14:43:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:29:15.523 14:43:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.523 14:43:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.523 14:43:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:15.523 14:43:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:15.523 14:43:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.523 14:43:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.523 14:43:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.523 14:43:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.523 14:43:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.523 14:43:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.523 14:43:22 -- paths/export.sh@5 -- # export PATH 00:29:15.523 14:43:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.523 14:43:22 -- nvmf/common.sh@46 -- # : 0 00:29:15.523 14:43:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:15.523 14:43:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:15.523 14:43:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:15.523 14:43:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.523 14:43:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.523 14:43:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:15.523 14:43:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:15.523 14:43:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:15.523 14:43:22 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:15.523 14:43:22 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:15.523 14:43:22 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:15.523 14:43:22 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:15.523 14:43:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:15.523 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:15.523 14:43:22 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:15.523 14:43:22 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=91100 00:29:15.523 14:43:22 -- spdkcli/common.sh@34 -- # waitforlisten 91100 00:29:15.523 14:43:22 -- common/autotest_common.sh@829 -- # '[' -z 91100 ']' 00:29:15.523 14:43:22 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:15.523 14:43:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.523 14:43:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:15.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.523 14:43:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.523 14:43:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:15.523 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:15.523 [2024-12-06 14:43:22.404970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:15.523 [2024-12-06 14:43:22.405090] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91100 ] 00:29:15.782 [2024-12-06 14:43:22.541639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.782 [2024-12-06 14:43:22.667199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:15.782 [2024-12-06 14:43:22.667482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.782 [2024-12-06 14:43:22.667653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.718 14:43:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:16.718 14:43:23 -- common/autotest_common.sh@862 -- # return 0 00:29:16.718 14:43:23 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:16.718 14:43:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:16.718 14:43:23 -- common/autotest_common.sh@10 -- # set +x 00:29:16.718 14:43:23 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:16.718 14:43:23 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:16.718 14:43:23 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:16.718 14:43:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:16.718 14:43:23 -- common/autotest_common.sh@10 -- # set +x 00:29:16.718 14:43:23 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:16.718 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:16.718 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:16.718 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:16.718 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:16.718 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:16.718 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:16.718 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:16.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:16.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:16.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:16.719 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:16.719 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:16.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:16.719 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:16.719 ' 00:29:17.296 [2024-12-06 14:43:23.980237] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:19.824 [2024-12-06 14:43:26.240155] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.758 [2024-12-06 14:43:27.534164] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:23.288 [2024-12-06 14:43:29.924361] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:25.184 [2024-12-06 14:43:31.985874] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:29:27.094 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:27.094 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:27.094 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:27.094 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:27.094 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:27.094 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:27.094 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:27.094 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:27.094 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:27.094 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:27.094 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:29:27.094 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:27.094 14:43:33 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:27.094 14:43:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:27.094 14:43:33 -- common/autotest_common.sh@10 -- # set +x 00:29:27.094 14:43:33 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:27.094 14:43:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:27.094 14:43:33 -- common/autotest_common.sh@10 -- # set +x 00:29:27.094 14:43:33 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:27.094 14:43:33 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:27.369 14:43:34 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:27.369 14:43:34 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:27.369 14:43:34 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:27.369 14:43:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:27.369 14:43:34 -- common/autotest_common.sh@10 -- # set +x 00:29:27.369 14:43:34 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:27.369 14:43:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:27.369 14:43:34 -- common/autotest_common.sh@10 -- # set +x 00:29:27.369 14:43:34 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:27.369 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:27.369 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:27.369 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:27.369 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:27.369 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:27.369 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:27.369 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:27.369 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:27.369 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:27.369 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:27.369 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:27.369 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:27.369 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:27.369 ' 00:29:32.639 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:32.639 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:32.639 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:32.639 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:32.639 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:32.639 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:32.639 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:32.639 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:32.639 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:32.639 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:32.639 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:32.639 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:32.639 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:32.639 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:32.899 14:43:39 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:32.899 14:43:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:32.899 14:43:39 -- common/autotest_common.sh@10 -- # set +x 00:29:32.899 14:43:39 -- spdkcli/nvmf.sh@90 -- # killprocess 91100 00:29:32.899 14:43:39 -- common/autotest_common.sh@936 -- # '[' -z 91100 ']' 00:29:32.899 14:43:39 -- common/autotest_common.sh@940 -- # kill -0 91100 00:29:32.899 14:43:39 -- common/autotest_common.sh@941 -- # uname 00:29:32.899 14:43:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:32.899 14:43:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91100 00:29:32.899 killing process with pid 91100 00:29:32.899 14:43:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:32.899 14:43:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:32.899 14:43:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91100' 00:29:32.899 14:43:39 -- common/autotest_common.sh@955 -- # kill 91100 00:29:32.899 [2024-12-06 14:43:39.796840] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:32.899 14:43:39 -- common/autotest_common.sh@960 -- # wait 91100 00:29:33.468 14:43:40 -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:33.468 14:43:40 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:33.468 14:43:40 -- spdkcli/common.sh@13 -- # '[' -n 91100 ']' 00:29:33.468 14:43:40 -- spdkcli/common.sh@14 -- # killprocess 91100 00:29:33.468 14:43:40 -- common/autotest_common.sh@936 -- # '[' -z 91100 ']' 00:29:33.468 14:43:40 -- common/autotest_common.sh@940 -- # kill -0 91100 00:29:33.468 Process with pid 91100 is not found 00:29:33.468 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91100) - No such process 00:29:33.468 14:43:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91100 is not found' 00:29:33.468 14:43:40 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:33.468 14:43:40 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:33.468 14:43:40 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:33.468 ************************************ 00:29:33.468 END TEST spdkcli_nvmf_tcp 00:29:33.468 ************************************ 00:29:33.468 00:29:33.468 real 0m18.027s 00:29:33.468 user 0m38.843s 00:29:33.468 sys 0m1.027s 00:29:33.468 14:43:40 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:29:33.468 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:29:33.468 14:43:40 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:33.468 14:43:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:33.468 14:43:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:33.468 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:29:33.468 ************************************ 00:29:33.468 START TEST nvmf_identify_passthru 00:29:33.468 ************************************ 00:29:33.468 14:43:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:33.468 * Looking for test storage... 00:29:33.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:33.468 14:43:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:33.468 14:43:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:33.468 14:43:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:33.468 14:43:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:33.468 14:43:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:33.468 14:43:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:33.468 14:43:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:33.468 14:43:40 -- scripts/common.sh@335 -- # IFS=.-: 00:29:33.468 14:43:40 -- scripts/common.sh@335 -- # read -ra ver1 00:29:33.468 14:43:40 -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.468 14:43:40 -- scripts/common.sh@336 -- # read -ra ver2 00:29:33.468 14:43:40 -- scripts/common.sh@337 -- # local 'op=<' 00:29:33.468 14:43:40 -- scripts/common.sh@339 -- # ver1_l=2 00:29:33.468 14:43:40 -- scripts/common.sh@340 -- # ver2_l=1 00:29:33.468 14:43:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:33.468 14:43:40 -- scripts/common.sh@343 -- # case "$op" in 00:29:33.468 14:43:40 -- scripts/common.sh@344 -- # : 1 00:29:33.468 14:43:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:33.468 14:43:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:33.468 14:43:40 -- scripts/common.sh@364 -- # decimal 1 00:29:33.468 14:43:40 -- scripts/common.sh@352 -- # local d=1 00:29:33.468 14:43:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.468 14:43:40 -- scripts/common.sh@354 -- # echo 1 00:29:33.468 14:43:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:33.468 14:43:40 -- scripts/common.sh@365 -- # decimal 2 00:29:33.468 14:43:40 -- scripts/common.sh@352 -- # local d=2 00:29:33.468 14:43:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.468 14:43:40 -- scripts/common.sh@354 -- # echo 2 00:29:33.468 14:43:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:33.468 14:43:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:33.468 14:43:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:33.468 14:43:40 -- scripts/common.sh@367 -- # return 0 00:29:33.468 14:43:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.468 14:43:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.468 --rc genhtml_branch_coverage=1 00:29:33.468 --rc genhtml_function_coverage=1 00:29:33.468 --rc genhtml_legend=1 00:29:33.468 --rc geninfo_all_blocks=1 00:29:33.468 --rc geninfo_unexecuted_blocks=1 00:29:33.468 00:29:33.468 ' 00:29:33.468 14:43:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.468 --rc genhtml_branch_coverage=1 00:29:33.468 --rc genhtml_function_coverage=1 00:29:33.468 --rc genhtml_legend=1 00:29:33.468 --rc geninfo_all_blocks=1 00:29:33.468 --rc geninfo_unexecuted_blocks=1 00:29:33.468 00:29:33.468 ' 00:29:33.468 14:43:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.468 --rc genhtml_branch_coverage=1 00:29:33.468 --rc genhtml_function_coverage=1 00:29:33.468 --rc genhtml_legend=1 00:29:33.468 --rc geninfo_all_blocks=1 00:29:33.468 --rc geninfo_unexecuted_blocks=1 00:29:33.468 00:29:33.468 ' 00:29:33.468 14:43:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.468 --rc genhtml_branch_coverage=1 00:29:33.468 --rc genhtml_function_coverage=1 00:29:33.468 --rc genhtml_legend=1 00:29:33.468 --rc geninfo_all_blocks=1 00:29:33.468 --rc geninfo_unexecuted_blocks=1 00:29:33.468 00:29:33.468 ' 00:29:33.468 14:43:40 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:33.468 14:43:40 -- nvmf/common.sh@7 -- # uname -s 00:29:33.468 14:43:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.468 14:43:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.468 14:43:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.468 14:43:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.468 14:43:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.468 14:43:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.468 14:43:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.468 14:43:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.468 14:43:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.468 14:43:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.468 14:43:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 
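[editor's note] As in the trace above, nvmf/common.sh captures a host NQN from nvme-cli and derives the host ID from its trailing UUID. A hedged sketch of that pairing (the nvme connect line is illustrative and commented out):

# nvme-cli's gen-hostnqn prints "nqn.2014-08.org.nvmexpress:uuid:<random-uuid>"
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}          # strip everything up to the last ':' to keep the UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "connecting as ${NVME_HOSTNQN} (host ID ${NVME_HOSTID})"
# nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"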
00:29:33.468 14:43:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:29:33.468 14:43:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.468 14:43:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.468 14:43:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:33.468 14:43:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:33.468 14:43:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.468 14:43:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.468 14:43:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.468 14:43:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.468 14:43:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.468 14:43:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.468 14:43:40 -- paths/export.sh@5 -- # export PATH 00:29:33.468 14:43:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.468 14:43:40 -- nvmf/common.sh@46 -- # : 0 00:29:33.468 14:43:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:33.468 14:43:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:33.468 14:43:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:33.468 14:43:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.468 14:43:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.468 14:43:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:33.468 14:43:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:33.468 14:43:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:33.728 14:43:40 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:33.728 14:43:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.728 14:43:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.728 14:43:40 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.728 14:43:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.728 14:43:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.728 14:43:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.728 14:43:40 -- paths/export.sh@5 -- # export PATH 00:29:33.728 14:43:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.728 14:43:40 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:33.728 14:43:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:33.728 14:43:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.728 14:43:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:33.728 14:43:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:33.728 14:43:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:33.728 14:43:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.728 14:43:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:33.728 14:43:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.728 14:43:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:29:33.728 14:43:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:29:33.728 14:43:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:29:33.728 14:43:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:29:33.728 14:43:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:29:33.728 14:43:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:29:33.728 14:43:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.728 14:43:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.728 14:43:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:33.728 14:43:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:29:33.728 14:43:40 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:33.728 14:43:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:33.728 14:43:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:33.728 14:43:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.728 14:43:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:33.728 14:43:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:33.728 14:43:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:33.728 14:43:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:33.728 14:43:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:29:33.728 14:43:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:29:33.728 Cannot find device "nvmf_tgt_br" 00:29:33.728 14:43:40 -- nvmf/common.sh@154 -- # true 00:29:33.728 14:43:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:29:33.728 Cannot find device "nvmf_tgt_br2" 00:29:33.728 14:43:40 -- nvmf/common.sh@155 -- # true 00:29:33.728 14:43:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:29:33.728 14:43:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:29:33.728 Cannot find device "nvmf_tgt_br" 00:29:33.728 14:43:40 -- nvmf/common.sh@157 -- # true 00:29:33.728 14:43:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:29:33.728 Cannot find device "nvmf_tgt_br2" 00:29:33.728 14:43:40 -- nvmf/common.sh@158 -- # true 00:29:33.728 14:43:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:29:33.728 14:43:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:29:33.728 14:43:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:33.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:33.728 14:43:40 -- nvmf/common.sh@161 -- # true 00:29:33.728 14:43:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:33.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:33.728 14:43:40 -- nvmf/common.sh@162 -- # true 00:29:33.728 14:43:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:29:33.728 14:43:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:33.728 14:43:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:33.728 14:43:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:33.728 14:43:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:33.728 14:43:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:33.728 14:43:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:33.728 14:43:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:33.728 14:43:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:33.728 14:43:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:29:33.728 14:43:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:29:33.728 14:43:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:29:33.728 14:43:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:29:33.728 14:43:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:29:33.988 14:43:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:33.988 14:43:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:33.988 14:43:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:29:33.988 14:43:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:29:33.988 14:43:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:29:33.988 14:43:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:33.988 14:43:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:33.988 14:43:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:33.988 14:43:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:33.988 14:43:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:29:33.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:29:33.988 00:29:33.988 --- 10.0.0.2 ping statistics --- 00:29:33.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.988 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:29:33.988 14:43:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:29:33.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:33.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:29:33.988 00:29:33.988 --- 10.0.0.3 ping statistics --- 00:29:33.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.988 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:29:33.988 14:43:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:33.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:29:33.988 00:29:33.988 --- 10.0.0.1 ping statistics --- 00:29:33.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.988 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:29:33.988 14:43:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.988 14:43:40 -- nvmf/common.sh@421 -- # return 0 00:29:33.988 14:43:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:33.988 14:43:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.988 14:43:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:33.988 14:43:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:33.988 14:43:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.988 14:43:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:33.988 14:43:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:33.988 14:43:40 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:33.988 14:43:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.988 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:29:33.988 14:43:40 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:33.988 14:43:40 -- common/autotest_common.sh@1519 -- # bdfs=() 00:29:33.988 14:43:40 -- common/autotest_common.sh@1519 -- # local bdfs 00:29:33.988 14:43:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:33.988 14:43:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:33.988 14:43:40 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:33.988 14:43:40 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:33.988 14:43:40 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:33.988 14:43:40 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:33.988 14:43:40 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:33.988 14:43:40 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:29:33.988 14:43:40 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:29:33.988 14:43:40 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:29:33.988 14:43:40 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:29:33.988 14:43:40 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:29:33.988 14:43:40 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:29:33.988 14:43:40 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:33.988 14:43:40 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:34.247 14:43:41 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:29:34.247 14:43:41 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:29:34.247 14:43:41 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:34.247 14:43:41 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:34.506 14:43:41 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:34.506 14:43:41 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:34.506 14:43:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.506 14:43:41 -- common/autotest_common.sh@10 -- # set +x 00:29:34.506 14:43:41 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:29:34.506 14:43:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.506 14:43:41 -- common/autotest_common.sh@10 -- # set +x 00:29:34.506 14:43:41 -- target/identify_passthru.sh@31 -- # nvmfpid=91605 00:29:34.506 14:43:41 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:34.506 14:43:41 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:34.506 14:43:41 -- target/identify_passthru.sh@35 -- # waitforlisten 91605 00:29:34.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.506 14:43:41 -- common/autotest_common.sh@829 -- # '[' -z 91605 ']' 00:29:34.506 14:43:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.506 14:43:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.506 14:43:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.506 14:43:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.506 14:43:41 -- common/autotest_common.sh@10 -- # set +x 00:29:34.506 [2024-12-06 14:43:41.366783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:34.506 [2024-12-06 14:43:41.366892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.766 [2024-12-06 14:43:41.507632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.766 [2024-12-06 14:43:41.659192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:34.766 [2024-12-06 14:43:41.659707] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.766 [2024-12-06 14:43:41.659735] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.766 [2024-12-06 14:43:41.659748] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
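[editor's note] The target above is launched inside the test namespace with --wait-for-rpc, and waitforlisten then polls until the application answers on /var/tmp/spdk.sock. A rough sketch of that wait pattern, assuming SPDK's rpc.py is on PATH and the target was backgrounded (the real helper differs in detail):

# ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
pid=$!
for i in $(seq 1 100); do
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "nvmf_tgt exited before its RPC socket came up" >&2
    exit 1
  fi
  # rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock
  if rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    break
  fi
  sleep 0.1
done
# with --wait-for-rpc the framework still needs an explicit start before creating transports:
rpc.py framework_start_init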
00:29:34.766 [2024-12-06 14:43:41.659873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.766 [2024-12-06 14:43:41.660161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.766 [2024-12-06 14:43:41.660286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.766 [2024-12-06 14:43:41.660463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.703 14:43:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:35.703 14:43:42 -- common/autotest_common.sh@862 -- # return 0 00:29:35.703 14:43:42 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:35.703 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.703 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.703 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.704 14:43:42 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:35.704 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.704 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.704 [2024-12-06 14:43:42.555714] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:35.704 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.704 14:43:42 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.704 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.704 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.704 [2024-12-06 14:43:42.570387] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.704 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.704 14:43:42 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:35.704 14:43:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:35.704 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.704 14:43:42 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:29:35.704 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.704 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.963 Nvme0n1 00:29:35.963 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.963 14:43:42 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:35.963 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.963 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.963 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.963 14:43:42 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:35.963 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.963 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.963 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.963 14:43:42 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.963 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.963 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.963 [2024-12-06 14:43:42.722217] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.963 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:35.963 14:43:42 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:35.963 14:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.963 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:29:35.963 [2024-12-06 14:43:42.729899] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:35.963 [ 00:29:35.963 { 00:29:35.963 "allow_any_host": true, 00:29:35.963 "hosts": [], 00:29:35.963 "listen_addresses": [], 00:29:35.963 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:35.963 "subtype": "Discovery" 00:29:35.963 }, 00:29:35.963 { 00:29:35.963 "allow_any_host": true, 00:29:35.963 "hosts": [], 00:29:35.964 "listen_addresses": [ 00:29:35.964 { 00:29:35.964 "adrfam": "IPv4", 00:29:35.964 "traddr": "10.0.0.2", 00:29:35.964 "transport": "TCP", 00:29:35.964 "trsvcid": "4420", 00:29:35.964 "trtype": "TCP" 00:29:35.964 } 00:29:35.964 ], 00:29:35.964 "max_cntlid": 65519, 00:29:35.964 "max_namespaces": 1, 00:29:35.964 "min_cntlid": 1, 00:29:35.964 "model_number": "SPDK bdev Controller", 00:29:35.964 "namespaces": [ 00:29:35.964 { 00:29:35.964 "bdev_name": "Nvme0n1", 00:29:35.964 "name": "Nvme0n1", 00:29:35.964 "nguid": "A67C70FE2B474C509E46B32BEDC3B461", 00:29:35.964 "nsid": 1, 00:29:35.964 "uuid": "a67c70fe-2b47-4c50-9e46-b32bedc3b461" 00:29:35.964 } 00:29:35.964 ], 00:29:35.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.964 "serial_number": "SPDK00000000000001", 00:29:35.964 "subtype": "NVMe" 00:29:35.964 } 00:29:35.964 ] 00:29:35.964 14:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.964 14:43:42 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:35.964 14:43:42 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:35.964 14:43:42 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:36.223 14:43:42 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:36.223 14:43:42 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:36.223 14:43:42 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:36.223 14:43:42 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:36.223 14:43:43 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:36.223 14:43:43 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:36.223 14:43:43 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:36.223 14:43:43 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.223 14:43:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.223 14:43:43 -- common/autotest_common.sh@10 -- # set +x 00:29:36.223 14:43:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.223 14:43:43 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:36.223 14:43:43 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:36.223 14:43:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:36.223 14:43:43 -- nvmf/common.sh@116 -- # sync 00:29:36.483 14:43:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:36.483 14:43:43 -- nvmf/common.sh@119 -- # set +e 00:29:36.483 14:43:43 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:29:36.483 14:43:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:36.483 rmmod nvme_tcp 00:29:36.483 rmmod nvme_fabrics 00:29:36.483 rmmod nvme_keyring 00:29:36.483 14:43:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:36.483 14:43:43 -- nvmf/common.sh@123 -- # set -e 00:29:36.483 14:43:43 -- nvmf/common.sh@124 -- # return 0 00:29:36.483 14:43:43 -- nvmf/common.sh@477 -- # '[' -n 91605 ']' 00:29:36.483 14:43:43 -- nvmf/common.sh@478 -- # killprocess 91605 00:29:36.483 14:43:43 -- common/autotest_common.sh@936 -- # '[' -z 91605 ']' 00:29:36.483 14:43:43 -- common/autotest_common.sh@940 -- # kill -0 91605 00:29:36.483 14:43:43 -- common/autotest_common.sh@941 -- # uname 00:29:36.483 14:43:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:36.483 14:43:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91605 00:29:36.483 14:43:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:36.483 14:43:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:36.483 killing process with pid 91605 00:29:36.483 14:43:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91605' 00:29:36.483 14:43:43 -- common/autotest_common.sh@955 -- # kill 91605 00:29:36.483 [2024-12-06 14:43:43.342680] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:36.483 14:43:43 -- common/autotest_common.sh@960 -- # wait 91605 00:29:36.743 14:43:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:36.743 14:43:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:36.743 14:43:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:36.743 14:43:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:36.743 14:43:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:36.743 14:43:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.743 14:43:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:36.743 14:43:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.743 14:43:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:29:36.743 00:29:36.743 real 0m3.484s 00:29:36.743 user 0m8.399s 00:29:36.743 sys 0m0.906s 00:29:36.743 14:43:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:36.743 14:43:43 -- common/autotest_common.sh@10 -- # set +x 00:29:36.743 ************************************ 00:29:36.743 END TEST nvmf_identify_passthru 00:29:36.743 ************************************ 00:29:37.002 14:43:43 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:37.002 14:43:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:37.002 14:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:37.003 14:43:43 -- common/autotest_common.sh@10 -- # set +x 00:29:37.003 ************************************ 00:29:37.003 START TEST nvmf_dif 00:29:37.003 ************************************ 00:29:37.003 14:43:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:37.003 * Looking for test storage... 
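[editor's note] The identify_passthru run that just finished boils down to one check: the serial and model numbers reported over NVMe/TCP must match what the local PCIe controller reports, because the subsystem passes identify data through. An illustrative condensation of that comparison, reusing the same spdk_nvme_identify invocations traced above:

local_serial=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 \
               | grep 'Serial Number:' | awk '{print $3}')
remote_serial=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
               | grep 'Serial Number:' | awk '{print $3}')
if [ "$local_serial" != "$remote_serial" ]; then
  echo "identify passthru mismatch: $local_serial vs $remote_serial" >&2
  exit 1
fi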
00:29:37.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:37.003 14:43:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:37.003 14:43:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:37.003 14:43:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:37.003 14:43:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:37.003 14:43:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:37.003 14:43:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:37.003 14:43:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:37.003 14:43:43 -- scripts/common.sh@335 -- # IFS=.-: 00:29:37.003 14:43:43 -- scripts/common.sh@335 -- # read -ra ver1 00:29:37.003 14:43:43 -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.003 14:43:43 -- scripts/common.sh@336 -- # read -ra ver2 00:29:37.003 14:43:43 -- scripts/common.sh@337 -- # local 'op=<' 00:29:37.003 14:43:43 -- scripts/common.sh@339 -- # ver1_l=2 00:29:37.003 14:43:43 -- scripts/common.sh@340 -- # ver2_l=1 00:29:37.003 14:43:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:37.003 14:43:43 -- scripts/common.sh@343 -- # case "$op" in 00:29:37.003 14:43:43 -- scripts/common.sh@344 -- # : 1 00:29:37.003 14:43:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:37.003 14:43:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:37.003 14:43:43 -- scripts/common.sh@364 -- # decimal 1 00:29:37.003 14:43:43 -- scripts/common.sh@352 -- # local d=1 00:29:37.003 14:43:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.003 14:43:43 -- scripts/common.sh@354 -- # echo 1 00:29:37.003 14:43:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:37.003 14:43:43 -- scripts/common.sh@365 -- # decimal 2 00:29:37.003 14:43:43 -- scripts/common.sh@352 -- # local d=2 00:29:37.003 14:43:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.003 14:43:43 -- scripts/common.sh@354 -- # echo 2 00:29:37.003 14:43:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:37.003 14:43:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:37.003 14:43:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:37.003 14:43:43 -- scripts/common.sh@367 -- # return 0 00:29:37.003 14:43:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.003 14:43:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.003 --rc genhtml_branch_coverage=1 00:29:37.003 --rc genhtml_function_coverage=1 00:29:37.003 --rc genhtml_legend=1 00:29:37.003 --rc geninfo_all_blocks=1 00:29:37.003 --rc geninfo_unexecuted_blocks=1 00:29:37.003 00:29:37.003 ' 00:29:37.003 14:43:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.003 --rc genhtml_branch_coverage=1 00:29:37.003 --rc genhtml_function_coverage=1 00:29:37.003 --rc genhtml_legend=1 00:29:37.003 --rc geninfo_all_blocks=1 00:29:37.003 --rc geninfo_unexecuted_blocks=1 00:29:37.003 00:29:37.003 ' 00:29:37.003 14:43:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.003 --rc genhtml_branch_coverage=1 00:29:37.003 --rc genhtml_function_coverage=1 00:29:37.003 --rc genhtml_legend=1 00:29:37.003 --rc geninfo_all_blocks=1 00:29:37.003 --rc geninfo_unexecuted_blocks=1 00:29:37.003 00:29:37.003 ' 00:29:37.003 
14:43:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.003 --rc genhtml_branch_coverage=1 00:29:37.003 --rc genhtml_function_coverage=1 00:29:37.003 --rc genhtml_legend=1 00:29:37.003 --rc geninfo_all_blocks=1 00:29:37.003 --rc geninfo_unexecuted_blocks=1 00:29:37.003 00:29:37.003 ' 00:29:37.003 14:43:43 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:37.003 14:43:43 -- nvmf/common.sh@7 -- # uname -s 00:29:37.003 14:43:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.003 14:43:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.003 14:43:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.003 14:43:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.003 14:43:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.003 14:43:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.003 14:43:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.003 14:43:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.003 14:43:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.003 14:43:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.003 14:43:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:29:37.003 14:43:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:29:37.003 14:43:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.003 14:43:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.003 14:43:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:37.003 14:43:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:37.003 14:43:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.003 14:43:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.003 14:43:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.003 14:43:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.003 14:43:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.003 14:43:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.003 14:43:43 -- paths/export.sh@5 -- # export PATH 00:29:37.004 14:43:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.004 14:43:43 -- nvmf/common.sh@46 -- # : 0 00:29:37.004 14:43:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:37.004 14:43:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:37.004 14:43:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:37.004 14:43:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.004 14:43:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.004 14:43:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:37.004 14:43:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:37.004 14:43:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:37.004 14:43:43 -- target/dif.sh@15 -- # NULL_META=16 00:29:37.004 14:43:43 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:37.004 14:43:43 -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:37.004 14:43:43 -- target/dif.sh@15 -- # NULL_DIF=1 00:29:37.004 14:43:43 -- target/dif.sh@135 -- # nvmftestinit 00:29:37.004 14:43:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:37.004 14:43:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.004 14:43:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:37.004 14:43:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:37.004 14:43:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:37.004 14:43:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.004 14:43:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:37.004 14:43:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.004 14:43:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:29:37.004 14:43:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:29:37.004 14:43:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:29:37.004 14:43:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:29:37.004 14:43:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:29:37.004 14:43:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:29:37.004 14:43:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.004 14:43:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.004 14:43:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:37.004 14:43:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:29:37.004 14:43:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:37.004 14:43:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:37.004 14:43:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:37.004 14:43:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.004 14:43:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:37.004 14:43:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:37.004 14:43:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:37.004 14:43:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:37.004 14:43:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:29:37.263 14:43:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:29:37.263 Cannot find device "nvmf_tgt_br" 
00:29:37.263 14:43:43 -- nvmf/common.sh@154 -- # true 00:29:37.263 14:43:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:29:37.263 Cannot find device "nvmf_tgt_br2" 00:29:37.263 14:43:44 -- nvmf/common.sh@155 -- # true 00:29:37.263 14:43:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:29:37.263 14:43:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:29:37.263 Cannot find device "nvmf_tgt_br" 00:29:37.263 14:43:44 -- nvmf/common.sh@157 -- # true 00:29:37.263 14:43:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:29:37.263 Cannot find device "nvmf_tgt_br2" 00:29:37.263 14:43:44 -- nvmf/common.sh@158 -- # true 00:29:37.263 14:43:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:29:37.263 14:43:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:29:37.263 14:43:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:37.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:37.263 14:43:44 -- nvmf/common.sh@161 -- # true 00:29:37.263 14:43:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:37.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:37.263 14:43:44 -- nvmf/common.sh@162 -- # true 00:29:37.263 14:43:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:29:37.263 14:43:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:37.263 14:43:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:37.263 14:43:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:37.263 14:43:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:37.263 14:43:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:37.263 14:43:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:37.263 14:43:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:37.263 14:43:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:37.263 14:43:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:29:37.263 14:43:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:29:37.263 14:43:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:29:37.263 14:43:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:29:37.263 14:43:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:37.263 14:43:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:37.263 14:43:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:37.263 14:43:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:29:37.263 14:43:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:29:37.523 14:43:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:29:37.523 14:43:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:37.523 14:43:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:37.523 14:43:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:37.523 14:43:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:37.523 14:43:44 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:29:37.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:29:37.523 00:29:37.523 --- 10.0.0.2 ping statistics --- 00:29:37.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.523 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:29:37.523 14:43:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:29:37.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:37.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:29:37.523 00:29:37.523 --- 10.0.0.3 ping statistics --- 00:29:37.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.523 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:29:37.523 14:43:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:37.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:29:37.523 00:29:37.523 --- 10.0.0.1 ping statistics --- 00:29:37.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.523 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:29:37.523 14:43:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.523 14:43:44 -- nvmf/common.sh@421 -- # return 0 00:29:37.523 14:43:44 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:29:37.523 14:43:44 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:37.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:37.782 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:37.782 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:37.782 14:43:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.782 14:43:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:37.782 14:43:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:37.782 14:43:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.782 14:43:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:37.782 14:43:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:37.782 14:43:44 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:37.782 14:43:44 -- target/dif.sh@137 -- # nvmfappstart 00:29:37.782 14:43:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:37.782 14:43:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:37.782 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:29:37.782 14:43:44 -- nvmf/common.sh@469 -- # nvmfpid=91966 00:29:37.782 14:43:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:37.782 14:43:44 -- nvmf/common.sh@470 -- # waitforlisten 91966 00:29:37.782 14:43:44 -- common/autotest_common.sh@829 -- # '[' -z 91966 ']' 00:29:37.782 14:43:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.782 14:43:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.782 14:43:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
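The nvmf_veth_init trace above is what gives the TCP transport tests a self-contained network: the target side lives in the nvmf_tgt_ns_spdk namespace, both ends are joined by the nvmf_br bridge, and the 10.0.0.1/10.0.0.2/10.0.0.3 addresses are then verified by the three ping checks. Consolidated from the trace (the individual link up/down steps and error handling in nvmf/common.sh are omitted), the setup amounts to roughly:

# create the target-side namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator side gets 10.0.0.1, target side gets 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bridge the peer ends together and open the NVMe/TCP port on the initiator interface
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Port 4420 matters because every subsystem listener created later in this run binds 10.0.0.2:4420.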
00:29:37.782 14:43:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.782 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:29:38.041 [2024-12-06 14:43:44.807838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:38.041 [2024-12-06 14:43:44.807939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.041 [2024-12-06 14:43:44.946902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.299 [2024-12-06 14:43:45.037222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:38.299 [2024-12-06 14:43:45.037360] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.299 [2024-12-06 14:43:45.037372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.299 [2024-12-06 14:43:45.037380] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.299 [2024-12-06 14:43:45.037434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.865 14:43:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.865 14:43:45 -- common/autotest_common.sh@862 -- # return 0 00:29:38.865 14:43:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:38.865 14:43:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:38.865 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:39.124 14:43:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.124 14:43:45 -- target/dif.sh@139 -- # create_transport 00:29:39.124 14:43:45 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:39.124 14:43:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.124 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:39.124 [2024-12-06 14:43:45.884230] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.124 14:43:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.124 14:43:45 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:39.124 14:43:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:39.124 14:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:39.124 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:39.124 ************************************ 00:29:39.125 START TEST fio_dif_1_default 00:29:39.125 ************************************ 00:29:39.125 14:43:45 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:29:39.125 14:43:45 -- target/dif.sh@86 -- # create_subsystems 0 00:29:39.125 14:43:45 -- target/dif.sh@28 -- # local sub 00:29:39.125 14:43:45 -- target/dif.sh@30 -- # for sub in "$@" 00:29:39.125 14:43:45 -- target/dif.sh@31 -- # create_subsystem 0 00:29:39.125 14:43:45 -- target/dif.sh@18 -- # local sub_id=0 00:29:39.125 14:43:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:39.125 14:43:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.125 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:39.125 bdev_null0 00:29:39.125 14:43:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.125 14:43:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:39.125 14:43:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.125 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:39.125 14:43:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.125 14:43:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:39.125 14:43:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.125 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:39.125 14:43:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.125 14:43:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:39.125 14:43:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.125 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:39.125 [2024-12-06 14:43:45.928378] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.125 14:43:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.125 14:43:45 -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:39.125 14:43:45 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:39.125 14:43:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:39.125 14:43:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:39.125 14:43:45 -- target/dif.sh@82 -- # gen_fio_conf 00:29:39.125 14:43:45 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:39.125 14:43:45 -- target/dif.sh@54 -- # local file 00:29:39.125 14:43:45 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:39.125 14:43:45 -- target/dif.sh@56 -- # cat 00:29:39.125 14:43:45 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:39.125 14:43:45 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:39.125 14:43:45 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:39.125 14:43:45 -- common/autotest_common.sh@1330 -- # shift 00:29:39.125 14:43:45 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:39.125 14:43:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.125 14:43:45 -- nvmf/common.sh@520 -- # config=() 00:29:39.125 14:43:45 -- nvmf/common.sh@520 -- # local subsystem config 00:29:39.125 14:43:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:39.125 14:43:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:39.125 { 00:29:39.125 "params": { 00:29:39.125 "name": "Nvme$subsystem", 00:29:39.125 "trtype": "$TEST_TRANSPORT", 00:29:39.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.125 "adrfam": "ipv4", 00:29:39.125 "trsvcid": "$NVMF_PORT", 00:29:39.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.125 "hdgst": ${hdgst:-false}, 00:29:39.125 "ddgst": ${ddgst:-false} 00:29:39.125 }, 00:29:39.125 "method": "bdev_nvme_attach_controller" 00:29:39.125 } 00:29:39.125 EOF 00:29:39.125 )") 00:29:39.125 14:43:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:39.125 14:43:45 -- target/dif.sh@72 -- # (( file <= files )) 00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:39.125 14:43:45 -- nvmf/common.sh@542 -- # cat 00:29:39.125 14:43:45 -- nvmf/common.sh@544 -- # jq . 00:29:39.125 14:43:45 -- nvmf/common.sh@545 -- # IFS=, 00:29:39.125 14:43:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:39.125 "params": { 00:29:39.125 "name": "Nvme0", 00:29:39.125 "trtype": "tcp", 00:29:39.125 "traddr": "10.0.0.2", 00:29:39.125 "adrfam": "ipv4", 00:29:39.125 "trsvcid": "4420", 00:29:39.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.125 "hdgst": false, 00:29:39.125 "ddgst": false 00:29:39.125 }, 00:29:39.125 "method": "bdev_nvme_attach_controller" 00:29:39.125 }' 00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:29:39.125 14:43:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:29:39.125 14:43:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:29:39.125 14:43:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:29:39.125 14:43:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:29:39.125 14:43:45 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:39.125 14:43:45 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:39.384 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:39.384 fio-3.35 00:29:39.384 Starting 1 thread 00:29:39.952 [2024-12-06 14:43:46.644171] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
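Note what the fio invocation here is actually doing: the test never touches a kernel block device. fio is pointed at the spdk_bdev ioengine and handed an SPDK JSON configuration on /dev/fd/62 (generated by gen_nvmf_target_json and printed just above), which attaches the target as an NVMe-oF/TCP controller inside the fio process. With the shell templating stripped away, the controller entry from the trace sits inside the usual SPDK JSON-config wrapper, roughly as below; the outer "subsystems"/"config" layout is not echoed verbatim in the trace, so treat it as an assumption about what gen_nvmf_target_json emits:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

The multi-subsystem variants later in the run are the same file with one bdev_nvme_attach_controller entry per controller (Nvme0, Nvme1, ...), as their printf output shows.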
00:29:39.952 [2024-12-06 14:43:46.644269] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:49.978 00:29:49.978 filename0: (groupid=0, jobs=1): err= 0: pid=92052: Fri Dec 6 14:43:56 2024 00:29:49.978 read: IOPS=5811, BW=22.7MiB/s (23.8MB/s)(227MiB/10001msec) 00:29:49.978 slat (usec): min=5, max=158, avg= 7.69, stdev= 3.69 00:29:49.978 clat (usec): min=338, max=42428, avg=665.64, stdev=3164.80 00:29:49.978 lat (usec): min=344, max=42441, avg=673.33, stdev=3164.85 00:29:49.978 clat percentiles (usec): 00:29:49.978 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 388], 00:29:49.978 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 424], 00:29:49.978 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 490], 00:29:49.978 | 99.00th=[ 545], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:29:49.978 | 99.99th=[41681] 00:29:49.978 bw ( KiB/s): min= 2656, max=30400, per=98.88%, avg=22983.63, stdev=6282.59, samples=19 00:29:49.978 iops : min= 664, max= 7600, avg=5745.89, stdev=1570.65, samples=19 00:29:49.978 lat (usec) : 500=96.37%, 750=3.01% 00:29:49.978 lat (msec) : 4=0.01%, 50=0.61% 00:29:49.978 cpu : usr=88.39%, sys=9.45%, ctx=20, majf=0, minf=9 00:29:49.978 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:49.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.978 issued rwts: total=58116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:49.978 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:49.978 00:29:49.978 Run status group 0 (all jobs): 00:29:49.978 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=227MiB (238MB), run=10001-10001msec 00:29:50.237 14:43:57 -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:50.237 14:43:57 -- target/dif.sh@43 -- # local sub 00:29:50.237 14:43:57 -- target/dif.sh@45 -- # for sub in "$@" 00:29:50.237 14:43:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:50.237 14:43:57 -- target/dif.sh@36 -- # local sub_id=0 00:29:50.237 14:43:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 00:29:50.237 real 0m11.127s 00:29:50.237 user 0m9.533s 00:29:50.237 sys 0m1.278s 00:29:50.237 14:43:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:50.237 ************************************ 00:29:50.237 END TEST fio_dif_1_default 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 ************************************ 00:29:50.237 14:43:57 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:50.237 14:43:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:50.237 14:43:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 ************************************ 00:29:50.237 START TEST 
fio_dif_1_multi_subsystems 00:29:50.237 ************************************ 00:29:50.237 14:43:57 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:29:50.237 14:43:57 -- target/dif.sh@92 -- # local files=1 00:29:50.237 14:43:57 -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:50.237 14:43:57 -- target/dif.sh@28 -- # local sub 00:29:50.237 14:43:57 -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.237 14:43:57 -- target/dif.sh@31 -- # create_subsystem 0 00:29:50.237 14:43:57 -- target/dif.sh@18 -- # local sub_id=0 00:29:50.237 14:43:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 bdev_null0 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 [2024-12-06 14:43:57.113708] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.237 14:43:57 -- target/dif.sh@31 -- # create_subsystem 1 00:29:50.237 14:43:57 -- target/dif.sh@18 -- # local sub_id=1 00:29:50.237 14:43:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 bdev_null1 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.237 14:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.237 14:43:57 -- 
common/autotest_common.sh@10 -- # set +x 00:29:50.237 14:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.237 14:43:57 -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:50.237 14:43:57 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:50.237 14:43:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:50.237 14:43:57 -- nvmf/common.sh@520 -- # config=() 00:29:50.237 14:43:57 -- nvmf/common.sh@520 -- # local subsystem config 00:29:50.237 14:43:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:50.237 14:43:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:50.237 { 00:29:50.237 "params": { 00:29:50.237 "name": "Nvme$subsystem", 00:29:50.237 "trtype": "$TEST_TRANSPORT", 00:29:50.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.237 "adrfam": "ipv4", 00:29:50.237 "trsvcid": "$NVMF_PORT", 00:29:50.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.237 "hdgst": ${hdgst:-false}, 00:29:50.237 "ddgst": ${ddgst:-false} 00:29:50.237 }, 00:29:50.237 "method": "bdev_nvme_attach_controller" 00:29:50.237 } 00:29:50.237 EOF 00:29:50.237 )") 00:29:50.237 14:43:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.237 14:43:57 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.237 14:43:57 -- target/dif.sh@82 -- # gen_fio_conf 00:29:50.237 14:43:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:50.237 14:43:57 -- target/dif.sh@54 -- # local file 00:29:50.237 14:43:57 -- target/dif.sh@56 -- # cat 00:29:50.237 14:43:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:50.237 14:43:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:50.237 14:43:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.237 14:43:57 -- common/autotest_common.sh@1330 -- # shift 00:29:50.237 14:43:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:50.237 14:43:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.237 14:43:57 -- nvmf/common.sh@542 -- # cat 00:29:50.237 14:43:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.237 14:43:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:50.237 14:43:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:50.237 14:43:57 -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.237 14:43:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:50.237 14:43:57 -- target/dif.sh@73 -- # cat 00:29:50.237 14:43:57 -- target/dif.sh@72 -- # (( file++ )) 00:29:50.237 14:43:57 -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.237 14:43:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:50.237 14:43:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:50.237 { 00:29:50.237 "params": { 00:29:50.237 "name": "Nvme$subsystem", 00:29:50.237 "trtype": "$TEST_TRANSPORT", 00:29:50.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.237 "adrfam": "ipv4", 00:29:50.237 "trsvcid": "$NVMF_PORT", 00:29:50.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.237 "hdgst": ${hdgst:-false}, 00:29:50.237 "ddgst": ${ddgst:-false} 00:29:50.237 }, 00:29:50.237 "method": "bdev_nvme_attach_controller" 00:29:50.237 } 
00:29:50.237 EOF 00:29:50.237 )") 00:29:50.237 14:43:57 -- nvmf/common.sh@542 -- # cat 00:29:50.237 14:43:57 -- nvmf/common.sh@544 -- # jq . 00:29:50.237 14:43:57 -- nvmf/common.sh@545 -- # IFS=, 00:29:50.237 14:43:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:50.237 "params": { 00:29:50.237 "name": "Nvme0", 00:29:50.237 "trtype": "tcp", 00:29:50.237 "traddr": "10.0.0.2", 00:29:50.237 "adrfam": "ipv4", 00:29:50.237 "trsvcid": "4420", 00:29:50.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.237 "hdgst": false, 00:29:50.237 "ddgst": false 00:29:50.237 }, 00:29:50.237 "method": "bdev_nvme_attach_controller" 00:29:50.237 },{ 00:29:50.237 "params": { 00:29:50.237 "name": "Nvme1", 00:29:50.237 "trtype": "tcp", 00:29:50.237 "traddr": "10.0.0.2", 00:29:50.237 "adrfam": "ipv4", 00:29:50.237 "trsvcid": "4420", 00:29:50.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.237 "hdgst": false, 00:29:50.237 "ddgst": false 00:29:50.237 }, 00:29:50.237 "method": "bdev_nvme_attach_controller" 00:29:50.237 }' 00:29:50.237 14:43:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:29:50.237 14:43:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:29:50.237 14:43:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.237 14:43:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.237 14:43:57 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:29:50.237 14:43:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:50.496 14:43:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:29:50.496 14:43:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:29:50.496 14:43:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:50.496 14:43:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.496 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:50.496 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:50.496 fio-3.35 00:29:50.496 Starting 2 threads 00:29:51.063 [2024-12-06 14:43:57.929352] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:29:51.063 [2024-12-06 14:43:57.929443] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:03.271 00:30:03.271 filename0: (groupid=0, jobs=1): err= 0: pid=92212: Fri Dec 6 14:44:08 2024 00:30:03.271 read: IOPS=210, BW=843KiB/s (863kB/s)(8448KiB/10027msec) 00:30:03.271 slat (nsec): min=6058, max=61669, avg=8951.46, stdev=4868.84 00:30:03.271 clat (usec): min=356, max=41462, avg=18961.95, stdev=20142.96 00:30:03.271 lat (usec): min=362, max=41472, avg=18970.90, stdev=20142.81 00:30:03.271 clat percentiles (usec): 00:30:03.272 | 1.00th=[ 367], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 408], 00:30:03.272 | 30.00th=[ 424], 40.00th=[ 449], 50.00th=[ 486], 60.00th=[40633], 00:30:03.272 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:03.272 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:03.272 | 99.99th=[41681] 00:30:03.272 bw ( KiB/s): min= 576, max= 1280, per=49.12%, avg=845.47, stdev=181.12, samples=19 00:30:03.272 iops : min= 144, max= 320, avg=211.37, stdev=45.28, samples=19 00:30:03.272 lat (usec) : 500=51.42%, 750=2.56% 00:30:03.272 lat (msec) : 4=0.19%, 50=45.83% 00:30:03.272 cpu : usr=97.52%, sys=2.06%, ctx=9, majf=0, minf=0 00:30:03.272 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:03.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.272 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.272 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:03.272 filename1: (groupid=0, jobs=1): err= 0: pid=92213: Fri Dec 6 14:44:08 2024 00:30:03.272 read: IOPS=219, BW=879KiB/s (900kB/s)(8800KiB/10009msec) 00:30:03.272 slat (nsec): min=5198, max=64945, avg=9354.03, stdev=5241.24 00:30:03.272 clat (usec): min=364, max=41531, avg=18168.52, stdev=20062.24 00:30:03.272 lat (usec): min=370, max=41541, avg=18177.87, stdev=20062.19 00:30:03.272 clat percentiles (usec): 00:30:03.272 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 416], 00:30:03.272 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 498], 60.00th=[40633], 00:30:03.272 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:03.272 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:03.272 | 99.99th=[41681] 00:30:03.272 bw ( KiB/s): min= 576, max= 1696, per=50.81%, avg=874.11, stdev=246.27, samples=19 00:30:03.272 iops : min= 144, max= 424, avg=218.53, stdev=61.57, samples=19 00:30:03.272 lat (usec) : 500=50.45%, 750=5.41%, 1000=0.14% 00:30:03.272 lat (msec) : 10=0.18%, 50=43.82% 00:30:03.272 cpu : usr=96.87%, sys=2.69%, ctx=12, majf=0, minf=0 00:30:03.272 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:03.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.272 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.272 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:03.272 00:30:03.272 Run status group 0 (all jobs): 00:30:03.272 READ: bw=1720KiB/s (1761kB/s), 843KiB/s-879KiB/s (863kB/s-900kB/s), io=16.8MiB (17.7MB), run=10009-10027msec 00:30:03.272 14:44:08 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:03.272 14:44:08 -- target/dif.sh@43 -- # local sub 00:30:03.272 14:44:08 -- target/dif.sh@45 -- # for sub in "$@" 00:30:03.272 
14:44:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:03.272 14:44:08 -- target/dif.sh@36 -- # local sub_id=0 00:30:03.272 14:44:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 14:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 14:44:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 14:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 14:44:08 -- target/dif.sh@45 -- # for sub in "$@" 00:30:03.272 14:44:08 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:03.272 14:44:08 -- target/dif.sh@36 -- # local sub_id=1 00:30:03.272 14:44:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 14:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 14:44:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 14:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 00:30:03.272 real 0m11.266s 00:30:03.272 user 0m20.325s 00:30:03.272 sys 0m0.780s 00:30:03.272 14:44:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:03.272 ************************************ 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 END TEST fio_dif_1_multi_subsystems 00:30:03.272 ************************************ 00:30:03.272 14:44:08 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:03.272 14:44:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:03.272 14:44:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 ************************************ 00:30:03.272 START TEST fio_dif_rand_params 00:30:03.272 ************************************ 00:30:03.272 14:44:08 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:30:03.272 14:44:08 -- target/dif.sh@100 -- # local NULL_DIF 00:30:03.272 14:44:08 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:03.272 14:44:08 -- target/dif.sh@103 -- # NULL_DIF=3 00:30:03.272 14:44:08 -- target/dif.sh@103 -- # bs=128k 00:30:03.272 14:44:08 -- target/dif.sh@103 -- # numjobs=3 00:30:03.272 14:44:08 -- target/dif.sh@103 -- # iodepth=3 00:30:03.272 14:44:08 -- target/dif.sh@103 -- # runtime=5 00:30:03.272 14:44:08 -- target/dif.sh@105 -- # create_subsystems 0 00:30:03.272 14:44:08 -- target/dif.sh@28 -- # local sub 00:30:03.272 14:44:08 -- target/dif.sh@30 -- # for sub in "$@" 00:30:03.272 14:44:08 -- target/dif.sh@31 -- # create_subsystem 0 00:30:03.272 14:44:08 -- target/dif.sh@18 -- # local sub_id=0 00:30:03.272 14:44:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 bdev_null0 00:30:03.272 14:44:08 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 14:44:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 14:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 14:44:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 14:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 14:44:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:03.272 14:44:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.272 14:44:08 -- common/autotest_common.sh@10 -- # set +x 00:30:03.272 [2024-12-06 14:44:08.442910] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.272 14:44:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.272 14:44:08 -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:03.272 14:44:08 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:03.272 14:44:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:03.272 14:44:08 -- nvmf/common.sh@520 -- # config=() 00:30:03.272 14:44:08 -- nvmf/common.sh@520 -- # local subsystem config 00:30:03.272 14:44:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.272 14:44:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:03.272 14:44:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:03.272 { 00:30:03.272 "params": { 00:30:03.272 "name": "Nvme$subsystem", 00:30:03.272 "trtype": "$TEST_TRANSPORT", 00:30:03.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.272 "adrfam": "ipv4", 00:30:03.272 "trsvcid": "$NVMF_PORT", 00:30:03.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.272 "hdgst": ${hdgst:-false}, 00:30:03.272 "ddgst": ${ddgst:-false} 00:30:03.272 }, 00:30:03.272 "method": "bdev_nvme_attach_controller" 00:30:03.272 } 00:30:03.272 EOF 00:30:03.272 )") 00:30:03.272 14:44:08 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.272 14:44:08 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:03.272 14:44:08 -- target/dif.sh@82 -- # gen_fio_conf 00:30:03.272 14:44:08 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:03.272 14:44:08 -- target/dif.sh@54 -- # local file 00:30:03.272 14:44:08 -- target/dif.sh@56 -- # cat 00:30:03.272 14:44:08 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:03.272 14:44:08 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.272 14:44:08 -- common/autotest_common.sh@1330 -- # shift 00:30:03.272 14:44:08 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:03.272 14:44:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.272 14:44:08 -- nvmf/common.sh@542 -- # cat 00:30:03.272 14:44:08 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.272 14:44:08 
-- target/dif.sh@72 -- # (( file = 1 )) 00:30:03.272 14:44:08 -- target/dif.sh@72 -- # (( file <= files )) 00:30:03.272 14:44:08 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:03.272 14:44:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:03.272 14:44:08 -- nvmf/common.sh@544 -- # jq . 00:30:03.272 14:44:08 -- nvmf/common.sh@545 -- # IFS=, 00:30:03.272 14:44:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:03.272 "params": { 00:30:03.272 "name": "Nvme0", 00:30:03.272 "trtype": "tcp", 00:30:03.272 "traddr": "10.0.0.2", 00:30:03.272 "adrfam": "ipv4", 00:30:03.272 "trsvcid": "4420", 00:30:03.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:03.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:03.272 "hdgst": false, 00:30:03.272 "ddgst": false 00:30:03.272 }, 00:30:03.272 "method": "bdev_nvme_attach_controller" 00:30:03.272 }' 00:30:03.272 14:44:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:03.273 14:44:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:03.273 14:44:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.273 14:44:08 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.273 14:44:08 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:03.273 14:44:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:03.273 14:44:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:03.273 14:44:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:03.273 14:44:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:03.273 14:44:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.273 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:03.273 ... 00:30:03.273 fio-3.35 00:30:03.273 Starting 3 threads 00:30:03.273 [2024-12-06 14:44:09.146179] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
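Each fio_dif test builds its target the same way and only varies the DIF type and fio parameters; this fio_dif_rand_params pass uses dif-type 3 with 128 KiB blocks across 3 jobs. Pulled out of the xtrace (rpc_cmd is the harness wrapper around scripts/rpc.py, talking to the nvmf_tgt started earlier inside the namespace), the target-side sequence is roughly:

# transport was created once, earlier in the run, with DIF insert/strip enabled
rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip

# null bdev: size 64, 512-byte blocks, 16 bytes of metadata, protection information type 3
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# expose it over NVMe/TCP on the target address inside the namespace
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Teardown at the end of each test is the mirror image, nvmf_delete_subsystem followed by bdev_null_delete, as the destroy_subsystems traces show.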
00:30:03.273 [2024-12-06 14:44:09.146283] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:07.490 00:30:07.490 filename0: (groupid=0, jobs=1): err= 0: pid=92368: Fri Dec 6 14:44:14 2024 00:30:07.490 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(183MiB/5003msec) 00:30:07.490 slat (usec): min=6, max=109, avg=14.35, stdev= 9.38 00:30:07.490 clat (usec): min=3298, max=53507, avg=10240.38, stdev=5294.99 00:30:07.490 lat (usec): min=3316, max=53514, avg=10254.73, stdev=5296.25 00:30:07.490 clat percentiles (usec): 00:30:07.490 | 1.00th=[ 3720], 5.00th=[ 3916], 10.00th=[ 4047], 20.00th=[ 4490], 00:30:07.490 | 30.00th=[ 7898], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[11600], 00:30:07.490 | 70.00th=[13173], 80.00th=[14484], 90.00th=[15401], 95.00th=[16188], 00:30:07.490 | 99.00th=[21627], 99.50th=[45351], 99.90th=[51643], 99.95th=[53740], 00:30:07.490 | 99.99th=[53740] 00:30:07.490 bw ( KiB/s): min=28416, max=46941, per=41.53%, avg=37385.30, stdev=5388.18, samples=10 00:30:07.490 iops : min= 222, max= 366, avg=292.00, stdev=41.95, samples=10 00:30:07.490 lat (msec) : 4=8.76%, 10=44.25%, 20=45.08%, 50=1.71%, 100=0.21% 00:30:07.490 cpu : usr=93.82%, sys=4.38%, ctx=9, majf=0, minf=0 00:30:07.490 IO depths : 1=15.9%, 2=84.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:07.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.490 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:07.490 filename0: (groupid=0, jobs=1): err= 0: pid=92369: Fri Dec 6 14:44:14 2024 00:30:07.490 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(114MiB/5003msec) 00:30:07.490 slat (nsec): min=6574, max=84442, avg=19611.49, stdev=10643.59 00:30:07.490 clat (usec): min=5726, max=58123, avg=16500.94, stdev=13528.19 00:30:07.490 lat (usec): min=5753, max=58143, avg=16520.55, stdev=13527.98 00:30:07.490 clat percentiles (usec): 00:30:07.490 | 1.00th=[ 6063], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 9110], 00:30:07.490 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12518], 60.00th=[13173], 00:30:07.490 | 70.00th=[13829], 80.00th=[14877], 90.00th=[49021], 95.00th=[52691], 00:30:07.490 | 99.00th=[55837], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:30:07.490 | 99.99th=[57934] 00:30:07.490 bw ( KiB/s): min=17664, max=26624, per=24.77%, avg=22300.44, stdev=3185.75, samples=9 00:30:07.490 iops : min= 138, max= 208, avg=174.22, stdev=24.89, samples=9 00:30:07.490 lat (msec) : 10=26.32%, 20=61.34%, 50=3.85%, 100=8.48% 00:30:07.490 cpu : usr=94.58%, sys=3.78%, ctx=7, majf=0, minf=0 00:30:07.490 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:07.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.490 issued rwts: total=908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:07.490 filename0: (groupid=0, jobs=1): err= 0: pid=92370: Fri Dec 6 14:44:14 2024 00:30:07.490 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5005msec) 00:30:07.490 slat (nsec): min=4161, max=76611, avg=14671.54, stdev=7960.92 00:30:07.490 clat (usec): min=3573, max=54547, avg=13035.12, stdev=11364.01 00:30:07.490 lat (usec): min=3583, max=54566, avg=13049.79, stdev=11364.31 00:30:07.490 clat percentiles (usec): 
00:30:07.490 | 1.00th=[ 3982], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 8225], 00:30:07.490 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10421], 00:30:07.490 | 70.00th=[10945], 80.00th=[11600], 90.00th=[14353], 95.00th=[49546], 00:30:07.490 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[54789], 00:30:07.490 | 99.99th=[54789] 00:30:07.490 bw ( KiB/s): min=23296, max=36352, per=32.28%, avg=29057.67, stdev=4131.38, samples=9 00:30:07.490 iops : min= 182, max= 284, avg=227.00, stdev=32.28, samples=9 00:30:07.490 lat (msec) : 4=1.04%, 10=48.43%, 20=42.09%, 50=3.91%, 100=4.52% 00:30:07.490 cpu : usr=95.18%, sys=3.44%, ctx=5, majf=0, minf=0 00:30:07.490 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:07.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.490 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:07.490 00:30:07.490 Run status group 0 (all jobs): 00:30:07.490 READ: bw=87.9MiB/s (92.2MB/s), 22.7MiB/s-36.5MiB/s (23.8MB/s-38.3MB/s), io=440MiB (461MB), run=5003-5005msec 00:30:07.747 14:44:14 -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:07.747 14:44:14 -- target/dif.sh@43 -- # local sub 00:30:07.747 14:44:14 -- target/dif.sh@45 -- # for sub in "$@" 00:30:07.747 14:44:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:07.747 14:44:14 -- target/dif.sh@36 -- # local sub_id=0 00:30:07.747 14:44:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@109 -- # NULL_DIF=2 00:30:07.747 14:44:14 -- target/dif.sh@109 -- # bs=4k 00:30:07.747 14:44:14 -- target/dif.sh@109 -- # numjobs=8 00:30:07.747 14:44:14 -- target/dif.sh@109 -- # iodepth=16 00:30:07.747 14:44:14 -- target/dif.sh@109 -- # runtime= 00:30:07.747 14:44:14 -- target/dif.sh@109 -- # files=2 00:30:07.747 14:44:14 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:07.747 14:44:14 -- target/dif.sh@28 -- # local sub 00:30:07.747 14:44:14 -- target/dif.sh@30 -- # for sub in "$@" 00:30:07.747 14:44:14 -- target/dif.sh@31 -- # create_subsystem 0 00:30:07.747 14:44:14 -- target/dif.sh@18 -- # local sub_id=0 00:30:07.747 14:44:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 bdev_null0 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 [2024-12-06 14:44:14.545039] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@30 -- # for sub in "$@" 00:30:07.747 14:44:14 -- target/dif.sh@31 -- # create_subsystem 1 00:30:07.747 14:44:14 -- target/dif.sh@18 -- # local sub_id=1 00:30:07.747 14:44:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 bdev_null1 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@30 -- # for sub in "$@" 00:30:07.747 14:44:14 -- target/dif.sh@31 -- # create_subsystem 2 00:30:07.747 14:44:14 -- target/dif.sh@18 -- # local sub_id=2 00:30:07.747 14:44:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 bdev_null2 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.747 14:44:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:07.747 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.747 14:44:14 -- 
common/autotest_common.sh@10 -- # set +x 00:30:07.747 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.748 14:44:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:07.748 14:44:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.748 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:07.748 14:44:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.748 14:44:14 -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:07.748 14:44:14 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:07.748 14:44:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:07.748 14:44:14 -- nvmf/common.sh@520 -- # config=() 00:30:07.748 14:44:14 -- nvmf/common.sh@520 -- # local subsystem config 00:30:07.748 14:44:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:07.748 14:44:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.748 14:44:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:07.748 { 00:30:07.748 "params": { 00:30:07.748 "name": "Nvme$subsystem", 00:30:07.748 "trtype": "$TEST_TRANSPORT", 00:30:07.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.748 "adrfam": "ipv4", 00:30:07.748 "trsvcid": "$NVMF_PORT", 00:30:07.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.748 "hdgst": ${hdgst:-false}, 00:30:07.748 "ddgst": ${ddgst:-false} 00:30:07.748 }, 00:30:07.748 "method": "bdev_nvme_attach_controller" 00:30:07.748 } 00:30:07.748 EOF 00:30:07.748 )") 00:30:07.748 14:44:14 -- target/dif.sh@82 -- # gen_fio_conf 00:30:07.748 14:44:14 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.748 14:44:14 -- target/dif.sh@54 -- # local file 00:30:07.748 14:44:14 -- target/dif.sh@56 -- # cat 00:30:07.748 14:44:14 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:07.748 14:44:14 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:07.748 14:44:14 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:07.748 14:44:14 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:07.748 14:44:14 -- common/autotest_common.sh@1330 -- # shift 00:30:07.748 14:44:14 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:07.748 14:44:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.748 14:44:14 -- nvmf/common.sh@542 -- # cat 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:07.748 14:44:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:07.748 14:44:14 -- target/dif.sh@72 -- # (( file <= files )) 00:30:07.748 14:44:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:07.748 14:44:14 -- target/dif.sh@73 -- # cat 00:30:07.748 14:44:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:07.748 { 00:30:07.748 "params": { 00:30:07.748 "name": "Nvme$subsystem", 00:30:07.748 "trtype": "$TEST_TRANSPORT", 00:30:07.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.748 "adrfam": "ipv4", 00:30:07.748 "trsvcid": "$NVMF_PORT", 00:30:07.748 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.748 "hdgst": ${hdgst:-false}, 00:30:07.748 "ddgst": ${ddgst:-false} 00:30:07.748 }, 00:30:07.748 "method": "bdev_nvme_attach_controller" 00:30:07.748 } 00:30:07.748 EOF 00:30:07.748 )") 00:30:07.748 14:44:14 -- nvmf/common.sh@542 -- # cat 00:30:07.748 14:44:14 -- target/dif.sh@72 -- # (( file++ )) 00:30:07.748 14:44:14 -- target/dif.sh@72 -- # (( file <= files )) 00:30:07.748 14:44:14 -- target/dif.sh@73 -- # cat 00:30:07.748 14:44:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:07.748 14:44:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:07.748 { 00:30:07.748 "params": { 00:30:07.748 "name": "Nvme$subsystem", 00:30:07.748 "trtype": "$TEST_TRANSPORT", 00:30:07.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.748 "adrfam": "ipv4", 00:30:07.748 "trsvcid": "$NVMF_PORT", 00:30:07.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.748 "hdgst": ${hdgst:-false}, 00:30:07.748 "ddgst": ${ddgst:-false} 00:30:07.748 }, 00:30:07.748 "method": "bdev_nvme_attach_controller" 00:30:07.748 } 00:30:07.748 EOF 00:30:07.748 )") 00:30:07.748 14:44:14 -- target/dif.sh@72 -- # (( file++ )) 00:30:07.748 14:44:14 -- target/dif.sh@72 -- # (( file <= files )) 00:30:07.748 14:44:14 -- nvmf/common.sh@542 -- # cat 00:30:07.748 14:44:14 -- nvmf/common.sh@544 -- # jq . 00:30:07.748 14:44:14 -- nvmf/common.sh@545 -- # IFS=, 00:30:07.748 14:44:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:07.748 "params": { 00:30:07.748 "name": "Nvme0", 00:30:07.748 "trtype": "tcp", 00:30:07.748 "traddr": "10.0.0.2", 00:30:07.748 "adrfam": "ipv4", 00:30:07.748 "trsvcid": "4420", 00:30:07.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:07.748 "hdgst": false, 00:30:07.748 "ddgst": false 00:30:07.748 }, 00:30:07.748 "method": "bdev_nvme_attach_controller" 00:30:07.748 },{ 00:30:07.748 "params": { 00:30:07.748 "name": "Nvme1", 00:30:07.748 "trtype": "tcp", 00:30:07.748 "traddr": "10.0.0.2", 00:30:07.748 "adrfam": "ipv4", 00:30:07.748 "trsvcid": "4420", 00:30:07.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:07.748 "hdgst": false, 00:30:07.748 "ddgst": false 00:30:07.748 }, 00:30:07.748 "method": "bdev_nvme_attach_controller" 00:30:07.748 },{ 00:30:07.748 "params": { 00:30:07.748 "name": "Nvme2", 00:30:07.748 "trtype": "tcp", 00:30:07.748 "traddr": "10.0.0.2", 00:30:07.748 "adrfam": "ipv4", 00:30:07.748 "trsvcid": "4420", 00:30:07.748 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:07.748 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:07.748 "hdgst": false, 00:30:07.748 "ddgst": false 00:30:07.748 }, 00:30:07.748 "method": "bdev_nvme_attach_controller" 00:30:07.748 }' 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:07.748 14:44:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:07.748 14:44:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:07.748 14:44:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:07.748 14:44:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:07.748 
14:44:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:07.748 14:44:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.005 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:08.005 ... 00:30:08.005 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:08.005 ... 00:30:08.005 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:08.005 ... 00:30:08.005 fio-3.35 00:30:08.005 Starting 24 threads 00:30:08.571 [2024-12-06 14:44:15.476540] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:30:08.571 [2024-12-06 14:44:15.476628] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:20.775 00:30:20.775 filename0: (groupid=0, jobs=1): err= 0: pid=92466: Fri Dec 6 14:44:26 2024 00:30:20.775 read: IOPS=256, BW=1027KiB/s (1052kB/s)(10.0MiB/10001msec) 00:30:20.775 slat (usec): min=3, max=4073, avg=20.50, stdev=176.92 00:30:20.775 clat (msec): min=2, max=165, avg=62.19, stdev=29.94 00:30:20.775 lat (msec): min=2, max=165, avg=62.21, stdev=29.94 00:30:20.775 clat percentiles (msec): 00:30:20.775 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 17], 20.00th=[ 34], 00:30:20.775 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 70], 00:30:20.775 | 70.00th=[ 78], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 110], 00:30:20.775 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:30:20.775 | 99.99th=[ 165] 00:30:20.775 bw ( KiB/s): min= 640, max= 1589, per=3.47%, avg=895.42, stdev=217.87, samples=19 00:30:20.775 iops : min= 160, max= 397, avg=223.84, stdev=54.42, samples=19 00:30:20.775 lat (msec) : 4=0.90%, 10=3.47%, 20=9.35%, 50=14.76%, 100=62.93% 00:30:20.775 lat (msec) : 250=8.61% 00:30:20.775 cpu : usr=40.57%, sys=0.57%, ctx=1347, majf=0, minf=9 00:30:20.775 IO depths : 1=1.7%, 2=4.0%, 4=12.1%, 8=70.6%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:20.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.775 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.775 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.775 filename0: (groupid=0, jobs=1): err= 0: pid=92467: Fri Dec 6 14:44:26 2024 00:30:20.775 read: IOPS=246, BW=984KiB/s (1008kB/s)(9844KiB/10002msec) 00:30:20.775 slat (usec): min=5, max=8093, avg=21.61, stdev=243.36 00:30:20.775 clat (msec): min=7, max=167, avg=64.88, stdev=29.84 00:30:20.775 lat (msec): min=7, max=167, avg=64.90, stdev=29.84 00:30:20.775 clat percentiles (msec): 00:30:20.775 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 39], 00:30:20.775 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 69], 60.00th=[ 72], 00:30:20.776 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 111], 00:30:20.776 | 99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 167], 99.95th=[ 167], 00:30:20.776 | 99.99th=[ 167] 00:30:20.776 bw ( KiB/s): min= 512, max= 1424, per=3.39%, avg=875.37, stdev=214.67, samples=19 00:30:20.776 iops : min= 128, max= 356, avg=218.84, stdev=53.67, samples=19 00:30:20.776 lat (msec) : 10=1.30%, 20=9.83%, 50=18.89%, 100=60.38%, 250=9.59% 00:30:20.776 cpu : usr=33.95%, sys=0.49%, ctx=997, 
majf=0, minf=9 00:30:20.776 IO depths : 1=2.4%, 2=5.4%, 4=14.4%, 8=66.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:20.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.776 filename0: (groupid=0, jobs=1): err= 0: pid=92468: Fri Dec 6 14:44:26 2024 00:30:20.776 read: IOPS=315, BW=1260KiB/s (1291kB/s)(12.4MiB/10076msec) 00:30:20.776 slat (usec): min=3, max=4049, avg=14.08, stdev=84.38 00:30:20.776 clat (msec): min=6, max=129, avg=50.57, stdev=22.76 00:30:20.776 lat (msec): min=6, max=129, avg=50.59, stdev=22.77 00:30:20.776 clat percentiles (msec): 00:30:20.776 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 32], 00:30:20.776 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 55], 00:30:20.776 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 92], 00:30:20.776 | 99.00th=[ 114], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 130], 00:30:20.776 | 99.99th=[ 130] 00:30:20.776 bw ( KiB/s): min= 768, max= 3192, per=4.90%, avg=1265.85, stdev=531.96, samples=20 00:30:20.776 iops : min= 192, max= 798, avg=316.45, stdev=132.99, samples=20 00:30:20.776 lat (msec) : 10=1.73%, 20=8.31%, 50=45.20%, 100=42.08%, 250=2.68% 00:30:20.776 cpu : usr=44.38%, sys=0.67%, ctx=1384, majf=0, minf=9 00:30:20.776 IO depths : 1=0.4%, 2=1.0%, 4=7.1%, 8=78.3%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:20.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 issued rwts: total=3175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.776 filename0: (groupid=0, jobs=1): err= 0: pid=92469: Fri Dec 6 14:44:26 2024 00:30:20.776 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.95MiB/10003msec) 00:30:20.776 slat (usec): min=3, max=8021, avg=18.49, stdev=168.83 00:30:20.776 clat (msec): min=7, max=157, avg=62.70, stdev=30.99 00:30:20.776 lat (msec): min=7, max=157, avg=62.72, stdev=30.99 00:30:20.776 clat percentiles (msec): 00:30:20.776 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 35], 00:30:20.776 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:30:20.776 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 114], 00:30:20.776 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 159], 99.95th=[ 159], 00:30:20.776 | 99.99th=[ 159] 00:30:20.776 bw ( KiB/s): min= 640, max= 1541, per=3.47%, avg=894.58, stdev=221.01, samples=19 00:30:20.776 iops : min= 160, max= 385, avg=223.63, stdev=55.21, samples=19 00:30:20.776 lat (msec) : 10=1.41%, 20=15.31%, 50=15.08%, 100=58.26%, 250=9.93% 00:30:20.776 cpu : usr=37.45%, sys=0.85%, ctx=1195, majf=0, minf=9 00:30:20.776 IO depths : 1=2.5%, 2=5.5%, 4=14.8%, 8=66.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:20.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 issued rwts: total=2547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.776 filename0: (groupid=0, jobs=1): err= 0: pid=92470: Fri Dec 6 14:44:26 2024 00:30:20.776 read: IOPS=277, BW=1110KiB/s (1137kB/s)(10.8MiB/10002msec) 00:30:20.776 slat (usec): min=3, max=4039, avg=15.05, stdev=108.32 00:30:20.776 
clat (usec): min=1374, max=165458, avg=57516.29, stdev=29420.85 00:30:20.776 lat (usec): min=1381, max=165468, avg=57531.34, stdev=29422.46 00:30:20.776 clat percentiles (usec): 00:30:20.776 | 1.00th=[ 1483], 5.00th=[ 9765], 10.00th=[ 14877], 20.00th=[ 33817], 00:30:20.776 | 30.00th=[ 44827], 40.00th=[ 54264], 50.00th=[ 58459], 60.00th=[ 63701], 00:30:20.776 | 70.00th=[ 70779], 80.00th=[ 80217], 90.00th=[ 96994], 95.00th=[103285], 00:30:20.776 | 99.00th=[135267], 99.50th=[135267], 99.90th=[152044], 99.95th=[164627], 00:30:20.776 | 99.99th=[164627] 00:30:20.776 bw ( KiB/s): min= 638, max= 1463, per=3.71%, avg=956.05, stdev=215.20, samples=19 00:30:20.776 iops : min= 159, max= 365, avg=238.95, stdev=53.74, samples=19 00:30:20.776 lat (msec) : 2=1.95%, 4=0.94%, 10=2.31%, 20=11.35%, 50=18.91% 00:30:20.776 lat (msec) : 100=57.38%, 250=7.17% 00:30:20.776 cpu : usr=45.04%, sys=0.75%, ctx=1189, majf=0, minf=9 00:30:20.776 IO depths : 1=3.4%, 2=7.1%, 4=17.0%, 8=63.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:30:20.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 issued rwts: total=2776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.776 filename0: (groupid=0, jobs=1): err= 0: pid=92471: Fri Dec 6 14:44:26 2024 00:30:20.776 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10032msec) 00:30:20.776 slat (usec): min=3, max=11996, avg=23.43, stdev=308.68 00:30:20.776 clat (msec): min=19, max=132, avg=61.45, stdev=21.46 00:30:20.776 lat (msec): min=19, max=132, avg=61.47, stdev=21.46 00:30:20.776 clat percentiles (msec): 00:30:20.776 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 41], 00:30:20.776 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 66], 00:30:20.776 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 101], 00:30:20.776 | 99.00th=[ 106], 99.50th=[ 111], 99.90th=[ 133], 99.95th=[ 133], 00:30:20.776 | 99.99th=[ 133] 00:30:20.776 bw ( KiB/s): min= 640, max= 1888, per=4.02%, avg=1036.10, stdev=262.36, samples=20 00:30:20.776 iops : min= 160, max= 472, avg=259.00, stdev=65.57, samples=20 00:30:20.776 lat (msec) : 20=0.61%, 50=32.67%, 100=61.15%, 250=5.57% 00:30:20.776 cpu : usr=41.69%, sys=0.60%, ctx=1211, majf=0, minf=9 00:30:20.776 IO depths : 1=1.8%, 2=3.8%, 4=11.3%, 8=71.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:20.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 issued rwts: total=2605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.776 filename0: (groupid=0, jobs=1): err= 0: pid=92472: Fri Dec 6 14:44:26 2024 00:30:20.776 read: IOPS=266, BW=1066KiB/s (1092kB/s)(10.4MiB/10001msec) 00:30:20.776 slat (nsec): min=3863, max=69699, avg=14261.22, stdev=8934.26 00:30:20.776 clat (msec): min=11, max=141, avg=59.93, stdev=27.26 00:30:20.776 lat (msec): min=11, max=141, avg=59.94, stdev=27.26 00:30:20.776 clat percentiles (msec): 00:30:20.776 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 37], 00:30:20.776 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 67], 00:30:20.776 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:30:20.776 | 99.00th=[ 128], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 142], 00:30:20.776 | 99.99th=[ 142] 00:30:20.776 bw ( KiB/s): min= 608, max= 1376, per=3.66%, 
avg=945.95, stdev=222.83, samples=19 00:30:20.776 iops : min= 152, max= 344, avg=236.42, stdev=55.68, samples=19 00:30:20.776 lat (msec) : 20=13.43%, 50=23.82%, 100=55.78%, 250=6.98% 00:30:20.776 cpu : usr=42.33%, sys=0.63%, ctx=1223, majf=0, minf=9 00:30:20.776 IO depths : 1=1.4%, 2=3.0%, 4=10.0%, 8=72.8%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:20.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.776 complete : 0=0.0%, 4=90.4%, 8=5.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 issued rwts: total=2666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.777 filename0: (groupid=0, jobs=1): err= 0: pid=92473: Fri Dec 6 14:44:26 2024 00:30:20.777 read: IOPS=253, BW=1014KiB/s (1038kB/s)(9.94MiB/10040msec) 00:30:20.777 slat (usec): min=6, max=10992, avg=28.53, stdev=359.16 00:30:20.777 clat (msec): min=13, max=158, avg=62.80, stdev=26.26 00:30:20.777 lat (msec): min=13, max=158, avg=62.83, stdev=26.27 00:30:20.777 clat percentiles (msec): 00:30:20.777 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 37], 00:30:20.777 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 70], 00:30:20.777 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:30:20.777 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:30:20.777 | 99.99th=[ 159] 00:30:20.777 bw ( KiB/s): min= 640, max= 2136, per=3.93%, avg=1014.85, stdev=380.03, samples=20 00:30:20.777 iops : min= 160, max= 534, avg=253.70, stdev=95.01, samples=20 00:30:20.777 lat (msec) : 20=0.63%, 50=33.63%, 100=58.51%, 250=7.23% 00:30:20.777 cpu : usr=34.69%, sys=0.57%, ctx=974, majf=0, minf=9 00:30:20.777 IO depths : 1=1.5%, 2=3.2%, 4=11.7%, 8=71.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:30:20.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 issued rwts: total=2545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.777 filename1: (groupid=0, jobs=1): err= 0: pid=92474: Fri Dec 6 14:44:26 2024 00:30:20.777 read: IOPS=287, BW=1148KiB/s (1176kB/s)(11.3MiB/10052msec) 00:30:20.777 slat (usec): min=4, max=8025, avg=20.37, stdev=235.84 00:30:20.777 clat (msec): min=13, max=159, avg=55.50, stdev=20.59 00:30:20.777 lat (msec): min=13, max=159, avg=55.52, stdev=20.59 00:30:20.777 clat percentiles (msec): 00:30:20.777 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 37], 00:30:20.777 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 56], 60.00th=[ 61], 00:30:20.777 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 93], 00:30:20.777 | 99.00th=[ 108], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:30:20.777 | 99.99th=[ 161] 00:30:20.777 bw ( KiB/s): min= 720, max= 2192, per=4.45%, avg=1147.10, stdev=334.74, samples=20 00:30:20.777 iops : min= 180, max= 548, avg=286.75, stdev=83.69, samples=20 00:30:20.777 lat (msec) : 20=0.69%, 50=44.23%, 100=53.14%, 250=1.94% 00:30:20.777 cpu : usr=36.36%, sys=0.43%, ctx=965, majf=0, minf=9 00:30:20.777 IO depths : 1=0.7%, 2=1.6%, 4=8.7%, 8=76.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:20.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 issued rwts: total=2885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.777 
filename1: (groupid=0, jobs=1): err= 0: pid=92475: Fri Dec 6 14:44:26 2024 00:30:20.777 read: IOPS=268, BW=1075KiB/s (1101kB/s)(10.6MiB/10055msec) 00:30:20.777 slat (usec): min=3, max=8026, avg=17.12, stdev=172.56 00:30:20.777 clat (msec): min=14, max=132, avg=59.34, stdev=23.19 00:30:20.777 lat (msec): min=14, max=132, avg=59.35, stdev=23.19 00:30:20.777 clat percentiles (msec): 00:30:20.777 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 37], 00:30:20.777 | 30.00th=[ 46], 40.00th=[ 53], 50.00th=[ 60], 60.00th=[ 64], 00:30:20.777 | 70.00th=[ 70], 80.00th=[ 78], 90.00th=[ 93], 95.00th=[ 101], 00:30:20.777 | 99.00th=[ 118], 99.50th=[ 126], 99.90th=[ 133], 99.95th=[ 133], 00:30:20.777 | 99.99th=[ 133] 00:30:20.777 bw ( KiB/s): min= 736, max= 2096, per=4.16%, avg=1073.60, stdev=334.87, samples=20 00:30:20.777 iops : min= 184, max= 524, avg=268.35, stdev=83.65, samples=20 00:30:20.777 lat (msec) : 20=2.11%, 50=36.01%, 100=56.77%, 250=5.11% 00:30:20.777 cpu : usr=40.17%, sys=0.71%, ctx=1145, majf=0, minf=9 00:30:20.777 IO depths : 1=0.8%, 2=2.0%, 4=9.3%, 8=75.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:20.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 issued rwts: total=2702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.777 filename1: (groupid=0, jobs=1): err= 0: pid=92476: Fri Dec 6 14:44:26 2024 00:30:20.777 read: IOPS=294, BW=1177KiB/s (1205kB/s)(11.5MiB/10048msec) 00:30:20.777 slat (usec): min=3, max=4003, avg=16.05, stdev=112.72 00:30:20.777 clat (msec): min=6, max=131, avg=54.18, stdev=23.62 00:30:20.777 lat (msec): min=6, max=131, avg=54.20, stdev=23.63 00:30:20.777 clat percentiles (msec): 00:30:20.777 | 1.00th=[ 13], 5.00th=[ 19], 10.00th=[ 25], 20.00th=[ 35], 00:30:20.777 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 51], 60.00th=[ 59], 00:30:20.777 | 70.00th=[ 65], 80.00th=[ 74], 90.00th=[ 88], 95.00th=[ 97], 00:30:20.777 | 99.00th=[ 114], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 132], 00:30:20.777 | 99.99th=[ 132] 00:30:20.777 bw ( KiB/s): min= 640, max= 2920, per=4.57%, avg=1178.30, stdev=482.31, samples=20 00:30:20.777 iops : min= 160, max= 730, avg=294.55, stdev=120.59, samples=20 00:30:20.777 lat (msec) : 10=0.47%, 20=5.21%, 50=43.57%, 100=46.35%, 250=4.40% 00:30:20.777 cpu : usr=44.25%, sys=0.65%, ctx=1293, majf=0, minf=9 00:30:20.777 IO depths : 1=1.2%, 2=2.5%, 4=9.2%, 8=74.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:20.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 issued rwts: total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.777 filename1: (groupid=0, jobs=1): err= 0: pid=92477: Fri Dec 6 14:44:26 2024 00:30:20.777 read: IOPS=313, BW=1254KiB/s (1284kB/s)(12.4MiB/10095msec) 00:30:20.777 slat (usec): min=3, max=7036, avg=16.04, stdev=156.12 00:30:20.777 clat (usec): min=1284, max=156004, avg=50829.90, stdev=26628.33 00:30:20.777 lat (usec): min=1291, max=156011, avg=50845.94, stdev=26631.40 00:30:20.777 clat percentiles (usec): 00:30:20.777 | 1.00th=[ 1336], 5.00th=[ 1467], 10.00th=[ 16188], 20.00th=[ 31065], 00:30:20.777 | 30.00th=[ 38536], 40.00th=[ 43779], 50.00th=[ 50070], 60.00th=[ 55837], 00:30:20.777 | 70.00th=[ 61604], 80.00th=[ 69731], 90.00th=[ 86508], 95.00th=[ 99091], 
00:30:20.777 | 99.00th=[120062], 99.50th=[131597], 99.90th=[156238], 99.95th=[156238], 00:30:20.777 | 99.99th=[156238] 00:30:20.777 bw ( KiB/s): min= 688, max= 4096, per=4.88%, avg=1258.90, stdev=730.92, samples=20 00:30:20.777 iops : min= 172, max= 1024, avg=314.70, stdev=182.73, samples=20 00:30:20.777 lat (msec) : 2=6.51%, 4=1.07%, 10=1.01%, 20=2.21%, 50=38.94% 00:30:20.777 lat (msec) : 100=45.58%, 250=4.68% 00:30:20.777 cpu : usr=41.21%, sys=0.76%, ctx=1313, majf=0, minf=0 00:30:20.777 IO depths : 1=1.6%, 2=3.6%, 4=11.8%, 8=71.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:20.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.777 issued rwts: total=3164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.777 filename1: (groupid=0, jobs=1): err= 0: pid=92478: Fri Dec 6 14:44:26 2024 00:30:20.777 read: IOPS=255, BW=1024KiB/s (1048kB/s)(10.0MiB/10007msec) 00:30:20.777 slat (usec): min=4, max=6465, avg=17.27, stdev=150.26 00:30:20.777 clat (msec): min=6, max=150, avg=62.38, stdev=29.15 00:30:20.777 lat (msec): min=6, max=150, avg=62.40, stdev=29.15 00:30:20.777 clat percentiles (msec): 00:30:20.777 | 1.00th=[ 8], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 39], 00:30:20.777 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 68], 00:30:20.777 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 111], 00:30:20.778 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 150], 00:30:20.778 | 99.99th=[ 150] 00:30:20.778 bw ( KiB/s): min= 512, max= 1393, per=3.53%, avg=910.11, stdev=212.99, samples=19 00:30:20.778 iops : min= 128, max= 348, avg=227.47, stdev=53.24, samples=19 00:30:20.778 lat (msec) : 10=1.84%, 20=12.77%, 50=13.78%, 100=62.44%, 250=9.18% 00:30:20.778 cpu : usr=41.35%, sys=0.77%, ctx=1329, majf=0, minf=9 00:30:20.778 IO depths : 1=1.7%, 2=4.1%, 4=12.1%, 8=70.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:20.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 issued rwts: total=2561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.778 filename1: (groupid=0, jobs=1): err= 0: pid=92479: Fri Dec 6 14:44:26 2024 00:30:20.778 read: IOPS=279, BW=1119KiB/s (1145kB/s)(11.0MiB/10070msec) 00:30:20.778 slat (usec): min=3, max=8045, avg=26.59, stdev=330.14 00:30:20.778 clat (msec): min=5, max=163, avg=57.00, stdev=26.28 00:30:20.778 lat (msec): min=5, max=163, avg=57.03, stdev=26.27 00:30:20.778 clat percentiles (msec): 00:30:20.778 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 36], 00:30:20.778 | 30.00th=[ 39], 40.00th=[ 47], 50.00th=[ 53], 60.00th=[ 61], 00:30:20.778 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 94], 95.00th=[ 108], 00:30:20.778 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 163], 99.95th=[ 163], 00:30:20.778 | 99.99th=[ 163] 00:30:20.778 bw ( KiB/s): min= 640, max= 2565, per=4.34%, avg=1120.15, stdev=422.66, samples=20 00:30:20.778 iops : min= 160, max= 641, avg=280.00, stdev=105.63, samples=20 00:30:20.778 lat (msec) : 10=1.70%, 20=0.71%, 50=45.70%, 100=45.17%, 250=6.71% 00:30:20.778 cpu : usr=32.90%, sys=0.42%, ctx=893, majf=0, minf=9 00:30:20.778 IO depths : 1=1.0%, 2=2.2%, 4=9.2%, 8=74.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:20.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:20.778 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 issued rwts: total=2816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.778 filename1: (groupid=0, jobs=1): err= 0: pid=92480: Fri Dec 6 14:44:26 2024 00:30:20.778 read: IOPS=251, BW=1004KiB/s (1028kB/s)(9.82MiB/10015msec) 00:30:20.778 slat (usec): min=6, max=7965, avg=15.91, stdev=158.82 00:30:20.778 clat (msec): min=12, max=143, avg=63.64, stdev=22.65 00:30:20.778 lat (msec): min=12, max=143, avg=63.66, stdev=22.65 00:30:20.778 clat percentiles (msec): 00:30:20.778 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 46], 00:30:20.778 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 66], 00:30:20.778 | 70.00th=[ 71], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 105], 00:30:20.778 | 99.00th=[ 126], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:30:20.778 | 99.99th=[ 144] 00:30:20.778 bw ( KiB/s): min= 640, max= 1376, per=3.76%, avg=971.53, stdev=202.26, samples=19 00:30:20.778 iops : min= 160, max= 344, avg=242.84, stdev=50.58, samples=19 00:30:20.778 lat (msec) : 20=1.67%, 50=27.05%, 100=62.41%, 250=8.87% 00:30:20.778 cpu : usr=35.43%, sys=0.57%, ctx=1030, majf=0, minf=9 00:30:20.778 IO depths : 1=1.0%, 2=2.5%, 4=9.7%, 8=73.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:20.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 complete : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 issued rwts: total=2514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.778 filename1: (groupid=0, jobs=1): err= 0: pid=92481: Fri Dec 6 14:44:26 2024 00:30:20.778 read: IOPS=284, BW=1137KiB/s (1165kB/s)(11.1MiB/10009msec) 00:30:20.778 slat (usec): min=5, max=134, avg=12.27, stdev= 7.69 00:30:20.778 clat (msec): min=6, max=154, avg=56.17, stdev=30.23 00:30:20.778 lat (msec): min=6, max=154, avg=56.18, stdev=30.23 00:30:20.778 clat percentiles (msec): 00:30:20.778 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 24], 00:30:20.778 | 30.00th=[ 41], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 66], 00:30:20.778 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 107], 00:30:20.778 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:30:20.778 | 99.99th=[ 155] 00:30:20.778 bw ( KiB/s): min= 512, max= 1464, per=3.63%, avg=937.32, stdev=222.35, samples=19 00:30:20.778 iops : min= 128, max= 366, avg=234.32, stdev=55.60, samples=19 00:30:20.778 lat (msec) : 10=5.06%, 20=14.34%, 50=18.73%, 100=56.08%, 250=5.80% 00:30:20.778 cpu : usr=40.68%, sys=0.60%, ctx=935, majf=0, minf=9 00:30:20.778 IO depths : 1=0.8%, 2=2.0%, 4=9.5%, 8=75.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:20.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 issued rwts: total=2846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.778 filename2: (groupid=0, jobs=1): err= 0: pid=92482: Fri Dec 6 14:44:26 2024 00:30:20.778 read: IOPS=258, BW=1033KiB/s (1058kB/s)(10.1MiB/10010msec) 00:30:20.778 slat (usec): min=4, max=7063, avg=20.25, stdev=194.50 00:30:20.778 clat (msec): min=10, max=147, avg=61.82, stdev=26.26 00:30:20.778 lat (msec): min=10, max=147, avg=61.84, stdev=26.26 00:30:20.778 clat percentiles (msec): 00:30:20.778 | 1.00th=[ 13], 5.00th=[ 17], 
10.00th=[ 23], 20.00th=[ 40], 00:30:20.778 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 67], 00:30:20.778 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 105], 00:30:20.778 | 99.00th=[ 128], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 148], 00:30:20.778 | 99.99th=[ 148] 00:30:20.778 bw ( KiB/s): min= 640, max= 1262, per=3.60%, avg=929.05, stdev=158.34, samples=19 00:30:20.778 iops : min= 160, max= 315, avg=232.21, stdev=39.53, samples=19 00:30:20.778 lat (msec) : 20=8.58%, 50=20.61%, 100=63.30%, 250=7.50% 00:30:20.778 cpu : usr=44.10%, sys=0.75%, ctx=1294, majf=0, minf=9 00:30:20.778 IO depths : 1=1.8%, 2=3.8%, 4=11.6%, 8=71.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:20.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 issued rwts: total=2586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.778 filename2: (groupid=0, jobs=1): err= 0: pid=92483: Fri Dec 6 14:44:26 2024 00:30:20.778 read: IOPS=306, BW=1226KiB/s (1255kB/s)(12.0MiB/10042msec) 00:30:20.778 slat (nsec): min=3765, max=62123, avg=12067.69, stdev=7056.57 00:30:20.778 clat (msec): min=8, max=142, avg=52.07, stdev=26.31 00:30:20.778 lat (msec): min=8, max=142, avg=52.08, stdev=26.31 00:30:20.778 clat percentiles (msec): 00:30:20.778 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 22], 00:30:20.778 | 30.00th=[ 38], 40.00th=[ 46], 50.00th=[ 55], 60.00th=[ 61], 00:30:20.778 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 94], 00:30:20.778 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:30:20.778 | 99.99th=[ 144] 00:30:20.778 bw ( KiB/s): min= 736, max= 3968, per=4.75%, avg=1226.65, stdev=742.92, samples=20 00:30:20.778 iops : min= 184, max= 992, avg=306.65, stdev=185.73, samples=20 00:30:20.778 lat (msec) : 10=0.94%, 20=17.42%, 50=28.66%, 100=49.89%, 250=3.09% 00:30:20.778 cpu : usr=38.75%, sys=0.66%, ctx=940, majf=0, minf=9 00:30:20.778 IO depths : 1=0.6%, 2=1.3%, 4=5.8%, 8=79.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:20.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.778 issued rwts: total=3077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.778 filename2: (groupid=0, jobs=1): err= 0: pid=92484: Fri Dec 6 14:44:26 2024 00:30:20.778 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.0MiB/10023msec) 00:30:20.778 slat (usec): min=3, max=8040, avg=23.28, stdev=284.69 00:30:20.778 clat (msec): min=18, max=138, avg=62.22, stdev=24.55 00:30:20.779 lat (msec): min=18, max=138, avg=62.24, stdev=24.55 00:30:20.779 clat percentiles (msec): 00:30:20.779 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 42], 00:30:20.779 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 68], 00:30:20.779 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 110], 00:30:20.779 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:30:20.779 | 99.99th=[ 140] 00:30:20.779 bw ( KiB/s): min= 640, max= 1768, per=3.80%, avg=981.05, stdev=268.79, samples=19 00:30:20.779 iops : min= 160, max= 442, avg=245.26, stdev=67.20, samples=19 00:30:20.779 lat (msec) : 20=0.39%, 50=35.55%, 100=57.14%, 250=6.92% 00:30:20.779 cpu : usr=37.32%, sys=0.47%, ctx=909, majf=0, minf=9 00:30:20.779 IO depths : 1=1.6%, 2=3.2%, 4=10.3%, 8=73.4%, 
16=11.5%, 32=0.0%, >=64=0.0% 00:30:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.779 filename2: (groupid=0, jobs=1): err= 0: pid=92485: Fri Dec 6 14:44:26 2024 00:30:20.779 read: IOPS=243, BW=974KiB/s (998kB/s)(9744KiB/10002msec) 00:30:20.779 slat (usec): min=3, max=8026, avg=21.69, stdev=243.44 00:30:20.779 clat (msec): min=9, max=161, avg=65.57, stdev=28.89 00:30:20.779 lat (msec): min=9, max=161, avg=65.59, stdev=28.89 00:30:20.779 clat percentiles (msec): 00:30:20.779 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 44], 00:30:20.779 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:30:20.779 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 102], 95.00th=[ 112], 00:30:20.779 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 161], 00:30:20.779 | 99.99th=[ 161] 00:30:20.779 bw ( KiB/s): min= 512, max= 1400, per=3.38%, avg=873.26, stdev=202.46, samples=19 00:30:20.779 iops : min= 128, max= 350, avg=218.32, stdev=50.62, samples=19 00:30:20.779 lat (msec) : 10=0.33%, 20=11.12%, 50=14.74%, 100=63.05%, 250=10.76% 00:30:20.779 cpu : usr=33.90%, sys=0.41%, ctx=1000, majf=0, minf=9 00:30:20.779 IO depths : 1=2.8%, 2=6.4%, 4=16.6%, 8=64.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:30:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 issued rwts: total=2436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.779 filename2: (groupid=0, jobs=1): err= 0: pid=92486: Fri Dec 6 14:44:26 2024 00:30:20.779 read: IOPS=262, BW=1051KiB/s (1076kB/s)(10.3MiB/10049msec) 00:30:20.779 slat (usec): min=6, max=12029, avg=26.37, stdev=351.64 00:30:20.779 clat (msec): min=10, max=150, avg=60.55, stdev=23.37 00:30:20.779 lat (msec): min=10, max=150, avg=60.58, stdev=23.38 00:30:20.779 clat percentiles (msec): 00:30:20.779 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 41], 00:30:20.779 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 65], 00:30:20.779 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 102], 00:30:20.779 | 99.00th=[ 116], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 150], 00:30:20.779 | 99.99th=[ 150] 00:30:20.779 bw ( KiB/s): min= 640, max= 2176, per=4.07%, avg=1050.00, stdev=362.22, samples=20 00:30:20.779 iops : min= 160, max= 544, avg=262.45, stdev=90.50, samples=20 00:30:20.779 lat (msec) : 20=0.76%, 50=35.14%, 100=58.31%, 250=5.79% 00:30:20.779 cpu : usr=41.15%, sys=0.65%, ctx=1279, majf=0, minf=9 00:30:20.779 IO depths : 1=2.0%, 2=4.5%, 4=13.8%, 8=68.6%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 issued rwts: total=2641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.779 filename2: (groupid=0, jobs=1): err= 0: pid=92487: Fri Dec 6 14:44:26 2024 00:30:20.779 read: IOPS=270, BW=1080KiB/s (1106kB/s)(10.6MiB/10001msec) 00:30:20.779 slat (usec): min=3, max=3785, avg=14.21, stdev=73.10 00:30:20.779 clat (msec): min=4, max=174, avg=59.13, stdev=33.93 00:30:20.779 lat (msec): 
min=4, max=174, avg=59.15, stdev=33.93 00:30:20.779 clat percentiles (msec): 00:30:20.779 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 17], 00:30:20.779 | 30.00th=[ 42], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 70], 00:30:20.779 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 112], 00:30:20.779 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 176], 99.95th=[ 176], 00:30:20.779 | 99.99th=[ 176] 00:30:20.779 bw ( KiB/s): min= 512, max= 2018, per=3.51%, avg=906.63, stdev=326.49, samples=19 00:30:20.779 iops : min= 128, max= 504, avg=226.63, stdev=81.53, samples=19 00:30:20.779 lat (msec) : 10=5.63%, 20=15.33%, 50=15.73%, 100=52.91%, 250=10.40% 00:30:20.779 cpu : usr=37.52%, sys=0.54%, ctx=989, majf=0, minf=9 00:30:20.779 IO depths : 1=2.2%, 2=4.7%, 4=13.4%, 8=68.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:30:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 issued rwts: total=2701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.779 filename2: (groupid=0, jobs=1): err= 0: pid=92488: Fri Dec 6 14:44:26 2024 00:30:20.779 read: IOPS=262, BW=1049KiB/s (1074kB/s)(10.3MiB/10081msec) 00:30:20.779 slat (usec): min=5, max=7138, avg=20.21, stdev=227.62 00:30:20.779 clat (msec): min=18, max=155, avg=60.75, stdev=23.62 00:30:20.779 lat (msec): min=18, max=155, avg=60.77, stdev=23.61 00:30:20.779 clat percentiles (msec): 00:30:20.779 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 34], 20.00th=[ 39], 00:30:20.779 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 64], 00:30:20.779 | 70.00th=[ 71], 80.00th=[ 80], 90.00th=[ 94], 95.00th=[ 105], 00:30:20.779 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 157], 99.95th=[ 157], 00:30:20.779 | 99.99th=[ 157] 00:30:20.779 bw ( KiB/s): min= 640, max= 2096, per=4.07%, avg=1050.60, stdev=321.70, samples=20 00:30:20.779 iops : min= 160, max= 524, avg=262.60, stdev=80.38, samples=20 00:30:20.779 lat (msec) : 20=0.57%, 50=37.10%, 100=55.64%, 250=6.69% 00:30:20.779 cpu : usr=33.41%, sys=0.55%, ctx=1007, majf=0, minf=9 00:30:20.779 IO depths : 1=0.8%, 2=1.7%, 4=9.8%, 8=74.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.779 issued rwts: total=2644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.779 filename2: (groupid=0, jobs=1): err= 0: pid=92489: Fri Dec 6 14:44:26 2024 00:30:20.779 read: IOPS=265, BW=1060KiB/s (1086kB/s)(10.4MiB/10003msec) 00:30:20.779 slat (usec): min=4, max=8061, avg=19.75, stdev=191.99 00:30:20.779 clat (msec): min=12, max=160, avg=60.23, stdev=27.26 00:30:20.779 lat (msec): min=13, max=160, avg=60.25, stdev=27.26 00:30:20.779 clat percentiles (msec): 00:30:20.779 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 40], 00:30:20.779 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:30:20.779 | 70.00th=[ 71], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 106], 00:30:20.779 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 161], 99.95th=[ 161], 00:30:20.779 | 99.99th=[ 161] 00:30:20.780 bw ( KiB/s): min= 640, max= 1376, per=3.62%, avg=934.74, stdev=186.56, samples=19 00:30:20.780 iops : min= 160, max= 344, avg=233.68, stdev=46.64, samples=19 00:30:20.780 lat (msec) : 20=13.88%, 50=17.16%, 100=60.86%, 250=8.11% 00:30:20.780 
cpu : usr=44.97%, sys=0.73%, ctx=1319, majf=0, minf=9 00:30:20.780 IO depths : 1=2.7%, 2=5.9%, 4=16.1%, 8=65.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:30:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.780 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.780 issued rwts: total=2652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.780 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:20.780 00:30:20.780 Run status group 0 (all jobs): 00:30:20.780 READ: bw=25.2MiB/s (26.4MB/s), 974KiB/s-1260KiB/s (998kB/s-1291kB/s), io=254MiB (267MB), run=10001-10095msec 00:30:20.780 14:44:26 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:20.780 14:44:26 -- target/dif.sh@43 -- # local sub 00:30:20.780 14:44:26 -- target/dif.sh@45 -- # for sub in "$@" 00:30:20.780 14:44:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:20.780 14:44:26 -- target/dif.sh@36 -- # local sub_id=0 00:30:20.780 14:44:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@45 -- # for sub in "$@" 00:30:20.780 14:44:26 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:20.780 14:44:26 -- target/dif.sh@36 -- # local sub_id=1 00:30:20.780 14:44:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@45 -- # for sub in "$@" 00:30:20.780 14:44:26 -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:20.780 14:44:26 -- target/dif.sh@36 -- # local sub_id=2 00:30:20.780 14:44:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@115 -- # NULL_DIF=1 00:30:20.780 14:44:26 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:20.780 14:44:26 -- target/dif.sh@115 -- # numjobs=2 00:30:20.780 14:44:26 -- target/dif.sh@115 -- # iodepth=8 00:30:20.780 14:44:26 -- target/dif.sh@115 -- # runtime=5 00:30:20.780 14:44:26 -- target/dif.sh@115 -- # files=1 00:30:20.780 
14:44:26 -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:20.780 14:44:26 -- target/dif.sh@28 -- # local sub 00:30:20.780 14:44:26 -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.780 14:44:26 -- target/dif.sh@31 -- # create_subsystem 0 00:30:20.780 14:44:26 -- target/dif.sh@18 -- # local sub_id=0 00:30:20.780 14:44:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 bdev_null0 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 [2024-12-06 14:44:26.827198] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.780 14:44:26 -- target/dif.sh@31 -- # create_subsystem 1 00:30:20.780 14:44:26 -- target/dif.sh@18 -- # local sub_id=1 00:30:20.780 14:44:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 bdev_null1 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.780 14:44:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.780 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:30:20.780 14:44:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.780 14:44:26 -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:20.780 14:44:26 -- target/dif.sh@118 -- # 
create_json_sub_conf 0 1 00:30:20.780 14:44:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:20.780 14:44:26 -- nvmf/common.sh@520 -- # config=() 00:30:20.780 14:44:26 -- nvmf/common.sh@520 -- # local subsystem config 00:30:20.780 14:44:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:20.780 14:44:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.780 14:44:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:20.780 { 00:30:20.780 "params": { 00:30:20.780 "name": "Nvme$subsystem", 00:30:20.780 "trtype": "$TEST_TRANSPORT", 00:30:20.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.780 "adrfam": "ipv4", 00:30:20.780 "trsvcid": "$NVMF_PORT", 00:30:20.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.780 "hdgst": ${hdgst:-false}, 00:30:20.781 "ddgst": ${ddgst:-false} 00:30:20.781 }, 00:30:20.781 "method": "bdev_nvme_attach_controller" 00:30:20.781 } 00:30:20.781 EOF 00:30:20.781 )") 00:30:20.781 14:44:26 -- target/dif.sh@82 -- # gen_fio_conf 00:30:20.781 14:44:26 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.781 14:44:26 -- target/dif.sh@54 -- # local file 00:30:20.781 14:44:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:20.781 14:44:26 -- target/dif.sh@56 -- # cat 00:30:20.781 14:44:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.781 14:44:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:20.781 14:44:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:20.781 14:44:26 -- common/autotest_common.sh@1330 -- # shift 00:30:20.781 14:44:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:20.781 14:44:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.781 14:44:26 -- nvmf/common.sh@542 -- # cat 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:20.781 14:44:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:20.781 14:44:26 -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.781 14:44:26 -- target/dif.sh@73 -- # cat 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:20.781 14:44:26 -- target/dif.sh@72 -- # (( file++ )) 00:30:20.781 14:44:26 -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.781 14:44:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:20.781 14:44:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:20.781 { 00:30:20.781 "params": { 00:30:20.781 "name": "Nvme$subsystem", 00:30:20.781 "trtype": "$TEST_TRANSPORT", 00:30:20.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.781 "adrfam": "ipv4", 00:30:20.781 "trsvcid": "$NVMF_PORT", 00:30:20.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.781 "hdgst": ${hdgst:-false}, 00:30:20.781 "ddgst": ${ddgst:-false} 00:30:20.781 }, 00:30:20.781 "method": "bdev_nvme_attach_controller" 00:30:20.781 } 00:30:20.781 EOF 00:30:20.781 )") 00:30:20.781 14:44:26 -- nvmf/common.sh@542 -- # cat 00:30:20.781 14:44:26 -- nvmf/common.sh@544 -- # jq . 
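For reference, the create_subsystems 0 1 trace above reduces to four RPCs per subsystem. A minimal standalone sketch using scripts/rpc.py (the rpc_cmd wrapper in the trace forwards to it) is given here; the repo path and the explicit nvmf_create_transport call are assumptions about state set up earlier in the run, while the remaining arguments mirror the traced commands.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # assumed rpc.py location for this workspace
  $RPC nvmf_create_transport -t tcp                        # assumed: the TCP transport was created earlier in the job
  for sub in 0 1; do
      # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (as in the trace above)
      $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
  done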
00:30:20.781 14:44:26 -- nvmf/common.sh@545 -- # IFS=, 00:30:20.781 14:44:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:20.781 "params": { 00:30:20.781 "name": "Nvme0", 00:30:20.781 "trtype": "tcp", 00:30:20.781 "traddr": "10.0.0.2", 00:30:20.781 "adrfam": "ipv4", 00:30:20.781 "trsvcid": "4420", 00:30:20.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:20.781 "hdgst": false, 00:30:20.781 "ddgst": false 00:30:20.781 }, 00:30:20.781 "method": "bdev_nvme_attach_controller" 00:30:20.781 },{ 00:30:20.781 "params": { 00:30:20.781 "name": "Nvme1", 00:30:20.781 "trtype": "tcp", 00:30:20.781 "traddr": "10.0.0.2", 00:30:20.781 "adrfam": "ipv4", 00:30:20.781 "trsvcid": "4420", 00:30:20.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:20.781 "hdgst": false, 00:30:20.781 "ddgst": false 00:30:20.781 }, 00:30:20.781 "method": "bdev_nvme_attach_controller" 00:30:20.781 }' 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:20.781 14:44:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:20.781 14:44:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:20.781 14:44:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:20.781 14:44:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:20.781 14:44:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:20.781 14:44:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.781 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:20.781 ... 00:30:20.781 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:20.781 ... 00:30:20.781 fio-3.35 00:30:20.781 Starting 4 threads 00:30:20.781 [2024-12-06 14:44:27.655662] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
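The filename0/filename1 job definitions above come from a job file the harness streams to fio over /dev/fd/61, alongside the generated JSON on /dev/fd/62. A rough standalone equivalent is sketched below, assuming the JSON printed by the trace is saved to /tmp/nvme.json and that the attached controllers Nvme0/Nvme1 expose their namespaces as bdevs Nvme0n1/Nvme1n1 (the usual bdev_nvme naming, not shown explicitly in the trace); job options beyond those traced (thread, time_based) are likewise assumptions.

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # plugin path taken from the trace
  cat > /tmp/dif.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1            # assumed: the SPDK bdev plugin is normally run with threads rather than forks
  rw=randread
  bs=8k,16k,128k      # read,write,trim sizes, matching bs=8k,16k,128k in the trace
  iodepth=8
  numjobs=2
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1
  EOF
  LD_PRELOAD=$PLUGIN /usr/src/fio/fio /tmp/dif.fio --spdk_json_conf=/tmp/nvme.json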
00:30:20.781 [2024-12-06 14:44:27.655746] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:26.051 00:30:26.051 filename0: (groupid=0, jobs=1): err= 0: pid=92622: Fri Dec 6 14:44:32 2024 00:30:26.051 read: IOPS=2056, BW=16.1MiB/s (16.8MB/s)(80.4MiB/5004msec) 00:30:26.051 slat (nsec): min=6010, max=87243, avg=10810.94, stdev=7535.73 00:30:26.051 clat (usec): min=1009, max=11002, avg=3832.24, stdev=488.30 00:30:26.051 lat (usec): min=1042, max=11009, avg=3843.05, stdev=487.92 00:30:26.051 clat percentiles (usec): 00:30:26.051 | 1.00th=[ 2114], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:30:26.051 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:30:26.051 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 4555], 00:30:26.051 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7635], 99.95th=[ 8979], 00:30:26.051 | 99.99th=[ 8979] 00:30:26.051 bw ( KiB/s): min=15232, max=17024, per=25.07%, avg=16460.80, stdev=526.72, samples=10 00:30:26.051 iops : min= 1904, max= 2128, avg=2057.60, stdev=65.84, samples=10 00:30:26.051 lat (msec) : 2=0.31%, 4=82.78%, 10=16.90%, 20=0.01% 00:30:26.051 cpu : usr=95.56%, sys=3.22%, ctx=10, majf=0, minf=9 00:30:26.051 IO depths : 1=9.5%, 2=25.0%, 4=50.0%, 8=15.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.051 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.051 issued rwts: total=10289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.051 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:26.051 filename0: (groupid=0, jobs=1): err= 0: pid=92623: Fri Dec 6 14:44:32 2024 00:30:26.052 read: IOPS=2052, BW=16.0MiB/s (16.8MB/s)(80.2MiB/5003msec) 00:30:26.052 slat (usec): min=6, max=119, avg=11.93, stdev= 5.73 00:30:26.052 clat (usec): min=2125, max=14286, avg=3843.68, stdev=446.20 00:30:26.052 lat (usec): min=2136, max=14294, avg=3855.62, stdev=446.01 00:30:26.052 clat percentiles (usec): 00:30:26.052 | 1.00th=[ 3130], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:30:26.052 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:30:26.052 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4113], 95.00th=[ 4424], 00:30:26.052 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7767], 99.95th=[11731], 00:30:26.052 | 99.99th=[11731] 00:30:26.052 bw ( KiB/s): min=15104, max=17024, per=25.01%, avg=16422.40, stdev=543.33, samples=10 00:30:26.052 iops : min= 1888, max= 2128, avg=2052.80, stdev=67.92, samples=10 00:30:26.052 lat (msec) : 4=83.61%, 10=16.31%, 20=0.08% 00:30:26.052 cpu : usr=95.22%, sys=3.52%, ctx=56, majf=0, minf=10 00:30:26.052 IO depths : 1=7.2%, 2=25.0%, 4=50.0%, 8=17.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.052 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.052 issued rwts: total=10271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.052 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:26.052 filename1: (groupid=0, jobs=1): err= 0: pid=92624: Fri Dec 6 14:44:32 2024 00:30:26.052 read: IOPS=2046, BW=16.0MiB/s (16.8MB/s)(80.0MiB/5003msec) 00:30:26.052 slat (usec): min=6, max=353, avg=13.25, stdev= 7.58 00:30:26.052 clat (usec): min=1325, max=12087, avg=3857.76, stdev=488.19 00:30:26.052 lat (usec): min=1335, max=12098, avg=3871.01, stdev=488.09 00:30:26.052 clat percentiles (usec): 00:30:26.052 | 1.00th=[ 2868], 
5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:30:26.052 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:30:26.052 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4228], 95.00th=[ 4621], 00:30:26.052 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 7767], 99.95th=[11863], 00:30:26.052 | 99.99th=[12125] 00:30:26.052 bw ( KiB/s): min=15104, max=17040, per=24.94%, avg=16374.50, stdev=560.93, samples=10 00:30:26.052 iops : min= 1888, max= 2130, avg=2046.80, stdev=70.11, samples=10 00:30:26.052 lat (msec) : 2=0.03%, 4=81.46%, 10=18.44%, 20=0.08% 00:30:26.052 cpu : usr=94.96%, sys=3.48%, ctx=52, majf=0, minf=9 00:30:26.052 IO depths : 1=1.4%, 2=16.3%, 4=58.4%, 8=23.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.052 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.052 issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.052 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:26.052 filename1: (groupid=0, jobs=1): err= 0: pid=92625: Fri Dec 6 14:44:32 2024 00:30:26.052 read: IOPS=2053, BW=16.0MiB/s (16.8MB/s)(80.2MiB/5002msec) 00:30:26.052 slat (nsec): min=6262, max=69998, avg=10390.20, stdev=6591.06 00:30:26.052 clat (usec): min=2155, max=13502, avg=3841.29, stdev=442.92 00:30:26.052 lat (usec): min=2177, max=13516, avg=3851.69, stdev=442.78 00:30:26.052 clat percentiles (usec): 00:30:26.052 | 1.00th=[ 2966], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3621], 00:30:26.052 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:30:26.052 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4113], 95.00th=[ 4424], 00:30:26.052 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7373], 99.95th=[11731], 00:30:26.052 | 99.99th=[11731] 00:30:26.052 bw ( KiB/s): min=15232, max=17024, per=24.93%, avg=16369.78, stdev=497.64, samples=9 00:30:26.052 iops : min= 1904, max= 2128, avg=2046.22, stdev=62.20, samples=9 00:30:26.052 lat (msec) : 4=83.36%, 10=16.56%, 20=0.08% 00:30:26.052 cpu : usr=95.00%, sys=3.78%, ctx=4, majf=0, minf=9 00:30:26.052 IO depths : 1=10.6%, 2=25.0%, 4=50.0%, 8=14.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.052 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.052 issued rwts: total=10272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.052 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:26.052 00:30:26.052 Run status group 0 (all jobs): 00:30:26.052 READ: bw=64.1MiB/s (67.2MB/s), 16.0MiB/s-16.1MiB/s (16.8MB/s-16.8MB/s), io=321MiB (336MB), run=5002-5004msec 00:30:26.052 14:44:33 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:26.052 14:44:33 -- target/dif.sh@43 -- # local sub 00:30:26.052 14:44:33 -- target/dif.sh@45 -- # for sub in "$@" 00:30:26.052 14:44:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:26.052 14:44:33 -- target/dif.sh@36 -- # local sub_id=0 00:30:26.052 14:44:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:26.052 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.052 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 14:44:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:26.311 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- 
# set +x 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 14:44:33 -- target/dif.sh@45 -- # for sub in "$@" 00:30:26.311 14:44:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:26.311 14:44:33 -- target/dif.sh@36 -- # local sub_id=1 00:30:26.311 14:44:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.311 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 14:44:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:26.311 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 00:30:26.311 real 0m24.637s 00:30:26.311 user 2m14.600s 00:30:26.311 sys 0m3.703s 00:30:26.311 14:44:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:26.311 ************************************ 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 END TEST fio_dif_rand_params 00:30:26.311 ************************************ 00:30:26.311 14:44:33 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:26.311 14:44:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:26.311 14:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 ************************************ 00:30:26.311 START TEST fio_dif_digest 00:30:26.311 ************************************ 00:30:26.311 14:44:33 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:30:26.311 14:44:33 -- target/dif.sh@123 -- # local NULL_DIF 00:30:26.311 14:44:33 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:26.311 14:44:33 -- target/dif.sh@125 -- # local hdgst ddgst 00:30:26.311 14:44:33 -- target/dif.sh@127 -- # NULL_DIF=3 00:30:26.311 14:44:33 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:26.311 14:44:33 -- target/dif.sh@127 -- # numjobs=3 00:30:26.311 14:44:33 -- target/dif.sh@127 -- # iodepth=3 00:30:26.311 14:44:33 -- target/dif.sh@127 -- # runtime=10 00:30:26.311 14:44:33 -- target/dif.sh@128 -- # hdgst=true 00:30:26.311 14:44:33 -- target/dif.sh@128 -- # ddgst=true 00:30:26.311 14:44:33 -- target/dif.sh@130 -- # create_subsystems 0 00:30:26.311 14:44:33 -- target/dif.sh@28 -- # local sub 00:30:26.311 14:44:33 -- target/dif.sh@30 -- # for sub in "$@" 00:30:26.311 14:44:33 -- target/dif.sh@31 -- # create_subsystem 0 00:30:26.311 14:44:33 -- target/dif.sh@18 -- # local sub_id=0 00:30:26.311 14:44:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:26.311 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 bdev_null0 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 14:44:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:26.311 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 14:44:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:26.311 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 14:44:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:26.311 14:44:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.311 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:26.311 [2024-12-06 14:44:33.141621] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.311 14:44:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.311 14:44:33 -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:26.311 14:44:33 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:26.311 14:44:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:26.311 14:44:33 -- nvmf/common.sh@520 -- # config=() 00:30:26.311 14:44:33 -- nvmf/common.sh@520 -- # local subsystem config 00:30:26.311 14:44:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:26.311 14:44:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.311 14:44:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:26.311 { 00:30:26.311 "params": { 00:30:26.311 "name": "Nvme$subsystem", 00:30:26.311 "trtype": "$TEST_TRANSPORT", 00:30:26.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.311 "adrfam": "ipv4", 00:30:26.311 "trsvcid": "$NVMF_PORT", 00:30:26.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.311 "hdgst": ${hdgst:-false}, 00:30:26.311 "ddgst": ${ddgst:-false} 00:30:26.311 }, 00:30:26.311 "method": "bdev_nvme_attach_controller" 00:30:26.311 } 00:30:26.311 EOF 00:30:26.311 )") 00:30:26.311 14:44:33 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.311 14:44:33 -- target/dif.sh@82 -- # gen_fio_conf 00:30:26.311 14:44:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:26.311 14:44:33 -- target/dif.sh@54 -- # local file 00:30:26.311 14:44:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:26.311 14:44:33 -- target/dif.sh@56 -- # cat 00:30:26.311 14:44:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:26.311 14:44:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:26.311 14:44:33 -- common/autotest_common.sh@1330 -- # shift 00:30:26.311 14:44:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:26.311 14:44:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.312 14:44:33 -- nvmf/common.sh@542 -- # cat 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:26.312 14:44:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:26.312 14:44:33 -- target/dif.sh@72 -- # (( file <= files )) 00:30:26.312 14:44:33 -- nvmf/common.sh@544 -- # jq . 
00:30:26.312 14:44:33 -- nvmf/common.sh@545 -- # IFS=, 00:30:26.312 14:44:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:26.312 "params": { 00:30:26.312 "name": "Nvme0", 00:30:26.312 "trtype": "tcp", 00:30:26.312 "traddr": "10.0.0.2", 00:30:26.312 "adrfam": "ipv4", 00:30:26.312 "trsvcid": "4420", 00:30:26.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:26.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:26.312 "hdgst": true, 00:30:26.312 "ddgst": true 00:30:26.312 }, 00:30:26.312 "method": "bdev_nvme_attach_controller" 00:30:26.312 }' 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:26.312 14:44:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:26.312 14:44:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:26.312 14:44:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:26.312 14:44:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:26.312 14:44:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:26.312 14:44:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.570 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:26.570 ... 00:30:26.570 fio-3.35 00:30:26.570 Starting 3 threads 00:30:26.830 [2024-12-06 14:44:33.746373] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
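For reference, the create_subsystems step traced above for the digest test reduces to four RPCs against the running nvmf target, and the generated host parameters turn on both the header digest (hdgst) and data digest (ddgst) for the NVMe/TCP connection. A condensed sketch using scripts/rpc.py, assuming the TCP transport was already created earlier in the run and the target listens on the default RPC socket:

# Target-side setup mirrored from the rpc_cmd calls traced above.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420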
00:30:26.830 [2024-12-06 14:44:33.746481] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:39.051 00:30:39.051 filename0: (groupid=0, jobs=1): err= 0: pid=92731: Fri Dec 6 14:44:43 2024 00:30:39.051 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(254MiB/10045msec) 00:30:39.051 slat (nsec): min=6679, max=71059, avg=14349.49, stdev=6197.74 00:30:39.051 clat (usec): min=8112, max=46881, avg=14814.76, stdev=3133.91 00:30:39.051 lat (usec): min=8131, max=46898, avg=14829.10, stdev=3133.00 00:30:39.051 clat percentiles (usec): 00:30:39.051 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[11863], 00:30:39.051 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15270], 60.00th=[15795], 00:30:39.051 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:30:39.051 | 99.00th=[21365], 99.50th=[22414], 99.90th=[24773], 99.95th=[44827], 00:30:39.051 | 99.99th=[46924] 00:30:39.051 bw ( KiB/s): min=22528, max=30720, per=30.14%, avg=25734.74, stdev=2543.40, samples=19 00:30:39.051 iops : min= 176, max= 240, avg=201.05, stdev=19.87, samples=19 00:30:39.051 lat (msec) : 10=12.47%, 20=85.41%, 50=2.12% 00:30:39.051 cpu : usr=94.48%, sys=4.22%, ctx=6, majf=0, minf=9 00:30:39.051 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.051 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.051 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:39.051 filename0: (groupid=0, jobs=1): err= 0: pid=92732: Fri Dec 6 14:44:43 2024 00:30:39.051 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10005msec) 00:30:39.051 slat (usec): min=6, max=345, avg=17.18, stdev=11.28 00:30:39.051 clat (usec): min=6703, max=53878, avg=12395.58, stdev=2846.37 00:30:39.051 lat (usec): min=6715, max=53888, avg=12412.76, stdev=2846.23 00:30:39.051 clat percentiles (usec): 00:30:39.051 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[10290], 00:30:39.051 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13042], 00:30:39.051 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15139], 95.00th=[15926], 00:30:39.051 | 99.00th=[18482], 99.50th=[19268], 99.90th=[51119], 99.95th=[53740], 00:30:39.051 | 99.99th=[53740] 00:30:39.051 bw ( KiB/s): min=26368, max=34816, per=36.04%, avg=30777.05, stdev=2220.07, samples=19 00:30:39.051 iops : min= 206, max= 272, avg=240.42, stdev=17.35, samples=19 00:30:39.051 lat (msec) : 10=19.07%, 20=80.80%, 100=0.12% 00:30:39.051 cpu : usr=92.94%, sys=5.10%, ctx=99, majf=0, minf=9 00:30:39.051 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.051 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.051 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:39.051 filename0: (groupid=0, jobs=1): err= 0: pid=92733: Fri Dec 6 14:44:43 2024 00:30:39.051 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10004msec) 00:30:39.051 slat (nsec): min=6576, max=84140, avg=13217.69, stdev=5603.66 00:30:39.051 clat (usec): min=6649, max=53963, avg=13291.63, stdev=8599.88 00:30:39.051 lat (usec): min=6659, max=53975, avg=13304.85, stdev=8599.87 00:30:39.051 clat percentiles (usec): 00:30:39.051 | 1.00th=[ 8979], 
5.00th=[ 9634], 10.00th=[10028], 20.00th=[10552], 00:30:39.051 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:30:39.051 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13566], 95.00th=[16188], 00:30:39.051 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:30:39.051 | 99.99th=[53740] 00:30:39.051 bw ( KiB/s): min=19968, max=34304, per=33.84%, avg=28897.58, stdev=4121.50, samples=19 00:30:39.052 iops : min= 156, max= 268, avg=225.74, stdev=32.17, samples=19 00:30:39.052 lat (msec) : 10=9.27%, 20=86.03%, 50=0.27%, 100=4.43% 00:30:39.052 cpu : usr=93.91%, sys=4.70%, ctx=7, majf=0, minf=9 00:30:39.052 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.052 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:39.052 00:30:39.052 Run status group 0 (all jobs): 00:30:39.052 READ: bw=83.4MiB/s (87.4MB/s), 25.2MiB/s-30.2MiB/s (26.5MB/s-31.7MB/s), io=838MiB (878MB), run=10004-10045msec 00:30:39.052 14:44:44 -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:39.052 14:44:44 -- target/dif.sh@43 -- # local sub 00:30:39.052 14:44:44 -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.052 14:44:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:39.052 14:44:44 -- target/dif.sh@36 -- # local sub_id=0 00:30:39.052 14:44:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.052 14:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.052 14:44:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.052 14:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.052 14:44:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:39.052 14:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.052 14:44:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.052 14:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.052 00:30:39.052 real 0m11.058s 00:30:39.052 user 0m28.889s 00:30:39.052 sys 0m1.678s 00:30:39.052 14:44:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:39.052 14:44:44 -- common/autotest_common.sh@10 -- # set +x 00:30:39.052 ************************************ 00:30:39.052 END TEST fio_dif_digest 00:30:39.052 ************************************ 00:30:39.052 14:44:44 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:39.052 14:44:44 -- target/dif.sh@147 -- # nvmftestfini 00:30:39.052 14:44:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:39.052 14:44:44 -- nvmf/common.sh@116 -- # sync 00:30:39.052 14:44:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:39.052 14:44:44 -- nvmf/common.sh@119 -- # set +e 00:30:39.052 14:44:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:39.052 14:44:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:39.052 rmmod nvme_tcp 00:30:39.052 rmmod nvme_fabrics 00:30:39.052 rmmod nvme_keyring 00:30:39.052 14:44:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:39.052 14:44:44 -- nvmf/common.sh@123 -- # set -e 00:30:39.052 14:44:44 -- nvmf/common.sh@124 -- # return 0 00:30:39.052 14:44:44 -- nvmf/common.sh@477 -- # '[' -n 91966 ']' 00:30:39.052 14:44:44 -- nvmf/common.sh@478 -- # killprocess 91966 00:30:39.052 14:44:44 -- common/autotest_common.sh@936 -- # '[' -z 91966 ']' 00:30:39.052 
14:44:44 -- common/autotest_common.sh@940 -- # kill -0 91966 00:30:39.052 14:44:44 -- common/autotest_common.sh@941 -- # uname 00:30:39.052 14:44:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:39.052 14:44:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91966 00:30:39.052 14:44:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:39.052 killing process with pid 91966 00:30:39.052 14:44:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:39.052 14:44:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91966' 00:30:39.052 14:44:44 -- common/autotest_common.sh@955 -- # kill 91966 00:30:39.052 14:44:44 -- common/autotest_common.sh@960 -- # wait 91966 00:30:39.052 14:44:44 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:30:39.052 14:44:44 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:39.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:39.052 Waiting for block devices as requested 00:30:39.052 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:39.052 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:30:39.052 14:44:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:39.052 14:44:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:39.052 14:44:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:39.052 14:44:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:39.052 14:44:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.052 14:44:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.052 14:44:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.052 14:44:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:39.052 00:30:39.052 real 1m1.529s 00:30:39.052 user 4m0.143s 00:30:39.052 sys 0m14.136s 00:30:39.052 14:44:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:39.052 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:30:39.052 ************************************ 00:30:39.052 END TEST nvmf_dif 00:30:39.052 ************************************ 00:30:39.052 14:44:45 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:39.052 14:44:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:39.052 14:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:39.052 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:30:39.052 ************************************ 00:30:39.052 START TEST nvmf_abort_qd_sizes 00:30:39.052 ************************************ 00:30:39.052 14:44:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:39.052 * Looking for test storage... 
00:30:39.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:39.052 14:44:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:39.052 14:44:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:39.052 14:44:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:39.052 14:44:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:39.052 14:44:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:39.052 14:44:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:39.052 14:44:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:39.052 14:44:45 -- scripts/common.sh@335 -- # IFS=.-: 00:30:39.052 14:44:45 -- scripts/common.sh@335 -- # read -ra ver1 00:30:39.052 14:44:45 -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.052 14:44:45 -- scripts/common.sh@336 -- # read -ra ver2 00:30:39.052 14:44:45 -- scripts/common.sh@337 -- # local 'op=<' 00:30:39.052 14:44:45 -- scripts/common.sh@339 -- # ver1_l=2 00:30:39.052 14:44:45 -- scripts/common.sh@340 -- # ver2_l=1 00:30:39.052 14:44:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:39.052 14:44:45 -- scripts/common.sh@343 -- # case "$op" in 00:30:39.052 14:44:45 -- scripts/common.sh@344 -- # : 1 00:30:39.052 14:44:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:39.052 14:44:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:39.052 14:44:45 -- scripts/common.sh@364 -- # decimal 1 00:30:39.052 14:44:45 -- scripts/common.sh@352 -- # local d=1 00:30:39.052 14:44:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.052 14:44:45 -- scripts/common.sh@354 -- # echo 1 00:30:39.052 14:44:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:39.052 14:44:45 -- scripts/common.sh@365 -- # decimal 2 00:30:39.052 14:44:45 -- scripts/common.sh@352 -- # local d=2 00:30:39.052 14:44:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.052 14:44:45 -- scripts/common.sh@354 -- # echo 2 00:30:39.052 14:44:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:39.052 14:44:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:39.052 14:44:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:39.052 14:44:45 -- scripts/common.sh@367 -- # return 0 00:30:39.052 14:44:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.052 14:44:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.052 --rc genhtml_branch_coverage=1 00:30:39.052 --rc genhtml_function_coverage=1 00:30:39.052 --rc genhtml_legend=1 00:30:39.052 --rc geninfo_all_blocks=1 00:30:39.052 --rc geninfo_unexecuted_blocks=1 00:30:39.052 00:30:39.052 ' 00:30:39.052 14:44:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.052 --rc genhtml_branch_coverage=1 00:30:39.052 --rc genhtml_function_coverage=1 00:30:39.052 --rc genhtml_legend=1 00:30:39.052 --rc geninfo_all_blocks=1 00:30:39.052 --rc geninfo_unexecuted_blocks=1 00:30:39.052 00:30:39.052 ' 00:30:39.052 14:44:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.052 --rc genhtml_branch_coverage=1 00:30:39.052 --rc genhtml_function_coverage=1 00:30:39.052 --rc genhtml_legend=1 00:30:39.052 --rc geninfo_all_blocks=1 00:30:39.052 --rc geninfo_unexecuted_blocks=1 00:30:39.052 00:30:39.052 ' 00:30:39.052 
14:44:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:39.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.052 --rc genhtml_branch_coverage=1 00:30:39.052 --rc genhtml_function_coverage=1 00:30:39.052 --rc genhtml_legend=1 00:30:39.052 --rc geninfo_all_blocks=1 00:30:39.052 --rc geninfo_unexecuted_blocks=1 00:30:39.052 00:30:39.052 ' 00:30:39.052 14:44:45 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:39.052 14:44:45 -- nvmf/common.sh@7 -- # uname -s 00:30:39.052 14:44:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.052 14:44:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.052 14:44:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.052 14:44:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.052 14:44:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.052 14:44:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.052 14:44:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.052 14:44:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.052 14:44:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.052 14:44:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.052 14:44:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:30:39.052 14:44:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=f4dc61da-51d8-47f8-bc7f-592b2964f87d 00:30:39.052 14:44:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.052 14:44:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.052 14:44:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:39.052 14:44:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:39.052 14:44:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.052 14:44:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.052 14:44:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.052 14:44:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.052 14:44:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.052 14:44:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.052 14:44:45 -- paths/export.sh@5 -- # export PATH 00:30:39.052 14:44:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.052 14:44:45 -- nvmf/common.sh@46 -- # : 0 00:30:39.052 14:44:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:39.052 14:44:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:39.053 14:44:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:39.053 14:44:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.053 14:44:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.053 14:44:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:39.053 14:44:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:39.053 14:44:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:39.053 14:44:45 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:30:39.053 14:44:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:39.053 14:44:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.053 14:44:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:39.053 14:44:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:39.053 14:44:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:39.053 14:44:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.053 14:44:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.053 14:44:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.053 14:44:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:39.053 14:44:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:39.053 14:44:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:39.053 14:44:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:39.053 14:44:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:39.053 14:44:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:39.053 14:44:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.053 14:44:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.053 14:44:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:39.053 14:44:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:39.053 14:44:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:39.053 14:44:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:39.053 14:44:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:39.053 14:44:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.053 14:44:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:39.053 14:44:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:39.053 14:44:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:39.053 14:44:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:39.053 14:44:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:39.053 14:44:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:39.053 Cannot find device "nvmf_tgt_br" 00:30:39.053 14:44:45 -- nvmf/common.sh@154 -- # true 00:30:39.053 14:44:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:39.053 Cannot find device "nvmf_tgt_br2" 00:30:39.053 14:44:45 -- nvmf/common.sh@155 -- # true 
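The nvmf_veth_init sequence traced below builds a dedicated network namespace for the target and bridges it to the initiator-side veth pair; the "Cannot find device" and "Cannot open network namespace" errors above are only best-effort cleanup of a previous run. A condensed subset of the commands that follow, with the same interface names and addresses as the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT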
00:30:39.053 14:44:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:39.053 14:44:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:39.053 Cannot find device "nvmf_tgt_br" 00:30:39.053 14:44:45 -- nvmf/common.sh@157 -- # true 00:30:39.053 14:44:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:39.053 Cannot find device "nvmf_tgt_br2" 00:30:39.053 14:44:45 -- nvmf/common.sh@158 -- # true 00:30:39.053 14:44:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:39.053 14:44:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:39.053 14:44:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:39.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:39.053 14:44:45 -- nvmf/common.sh@161 -- # true 00:30:39.053 14:44:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:39.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:39.053 14:44:45 -- nvmf/common.sh@162 -- # true 00:30:39.053 14:44:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:39.053 14:44:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:39.053 14:44:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:39.053 14:44:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:39.053 14:44:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:39.053 14:44:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:39.053 14:44:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:39.053 14:44:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:39.053 14:44:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:39.053 14:44:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:39.053 14:44:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:39.053 14:44:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:39.053 14:44:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:39.053 14:44:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:39.053 14:44:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:39.053 14:44:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:39.053 14:44:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:39.053 14:44:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:39.053 14:44:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:39.053 14:44:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:39.053 14:44:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:39.053 14:44:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:39.053 14:44:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:39.053 14:44:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:39.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:39.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:30:39.053 00:30:39.053 --- 10.0.0.2 ping statistics --- 00:30:39.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.053 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:30:39.053 14:44:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:39.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:39.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:30:39.053 00:30:39.053 --- 10.0.0.3 ping statistics --- 00:30:39.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.053 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:30:39.053 14:44:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:39.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:39.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:30:39.053 00:30:39.053 --- 10.0.0.1 ping statistics --- 00:30:39.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.053 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:30:39.053 14:44:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.053 14:44:45 -- nvmf/common.sh@421 -- # return 0 00:30:39.053 14:44:45 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:39.053 14:44:45 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:39.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:39.988 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:39.988 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:30:39.988 14:44:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.988 14:44:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:39.988 14:44:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:39.988 14:44:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.988 14:44:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:39.988 14:44:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:39.988 14:44:46 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:30:39.988 14:44:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:39.988 14:44:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:39.988 14:44:46 -- common/autotest_common.sh@10 -- # set +x 00:30:39.988 14:44:46 -- nvmf/common.sh@469 -- # nvmfpid=93331 00:30:39.988 14:44:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:39.988 14:44:46 -- nvmf/common.sh@470 -- # waitforlisten 93331 00:30:39.988 14:44:46 -- common/autotest_common.sh@829 -- # '[' -z 93331 ']' 00:30:39.988 14:44:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.988 14:44:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:39.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.988 14:44:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.988 14:44:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:39.988 14:44:46 -- common/autotest_common.sh@10 -- # set +x 00:30:39.988 [2024-12-06 14:44:46.934019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:30:39.988 [2024-12-06 14:44:46.934137] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.246 [2024-12-06 14:44:47.081939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:40.505 [2024-12-06 14:44:47.220232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:40.505 [2024-12-06 14:44:47.220461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.505 [2024-12-06 14:44:47.220487] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.505 [2024-12-06 14:44:47.220499] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.505 [2024-12-06 14:44:47.220656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.505 [2024-12-06 14:44:47.221309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.505 [2024-12-06 14:44:47.221480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.505 [2024-12-06 14:44:47.221490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.073 14:44:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:41.073 14:44:48 -- common/autotest_common.sh@862 -- # return 0 00:30:41.073 14:44:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:41.073 14:44:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:41.073 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:41.332 14:44:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.332 14:44:48 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:41.332 14:44:48 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:30:41.332 14:44:48 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:30:41.332 14:44:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:30:41.332 14:44:48 -- scripts/common.sh@312 -- # local nvmes 00:30:41.332 14:44:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:30:41.332 14:44:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:41.332 14:44:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:30:41.332 14:44:48 -- scripts/common.sh@297 -- # local bdf= 00:30:41.332 14:44:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:30:41.332 14:44:48 -- scripts/common.sh@232 -- # local class 00:30:41.332 14:44:48 -- scripts/common.sh@233 -- # local subclass 00:30:41.332 14:44:48 -- scripts/common.sh@234 -- # local progif 00:30:41.332 14:44:48 -- scripts/common.sh@235 -- # printf %02x 1 00:30:41.332 14:44:48 -- scripts/common.sh@235 -- # class=01 00:30:41.332 14:44:48 -- scripts/common.sh@236 -- # printf %02x 8 00:30:41.332 14:44:48 -- scripts/common.sh@236 -- # subclass=08 00:30:41.332 14:44:48 -- scripts/common.sh@237 -- # printf %02x 2 00:30:41.332 14:44:48 -- scripts/common.sh@237 -- # progif=02 00:30:41.332 14:44:48 -- scripts/common.sh@239 -- # hash lspci 00:30:41.332 14:44:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:30:41.332 14:44:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:30:41.332 14:44:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:30:41.332 14:44:48 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:41.332 14:44:48 -- scripts/common.sh@244 -- # tr -d '"' 00:30:41.332 14:44:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:41.332 14:44:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:30:41.332 14:44:48 -- scripts/common.sh@15 -- # local i 00:30:41.332 14:44:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:30:41.332 14:44:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:41.332 14:44:48 -- scripts/common.sh@24 -- # return 0 00:30:41.332 14:44:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:30:41.332 14:44:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:41.332 14:44:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:30:41.333 14:44:48 -- scripts/common.sh@15 -- # local i 00:30:41.333 14:44:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:30:41.333 14:44:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:41.333 14:44:48 -- scripts/common.sh@24 -- # return 0 00:30:41.333 14:44:48 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:30:41.333 14:44:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:30:41.333 14:44:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:30:41.333 14:44:48 -- scripts/common.sh@322 -- # uname -s 00:30:41.333 14:44:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:30:41.333 14:44:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:30:41.333 14:44:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:30:41.333 14:44:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:30:41.333 14:44:48 -- scripts/common.sh@322 -- # uname -s 00:30:41.333 14:44:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:30:41.333 14:44:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:30:41.333 14:44:48 -- scripts/common.sh@327 -- # (( 2 )) 00:30:41.333 14:44:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:30:41.333 14:44:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:41.333 14:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:41.333 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:41.333 ************************************ 00:30:41.333 START TEST spdk_target_abort 00:30:41.333 ************************************ 00:30:41.333 14:44:48 -- common/autotest_common.sh@1114 -- # spdk_target 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:30:41.333 14:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.333 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:41.333 spdk_targetn1 00:30:41.333 14:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.333 14:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.333 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:41.333 [2024-12-06 
14:44:48.195306] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.333 14:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:30:41.333 14:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.333 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:41.333 14:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:30:41.333 14:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.333 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:41.333 14:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:30:41.333 14:44:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.333 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:41.333 [2024-12-06 14:44:48.231628] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.333 14:44:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:41.333 14:44:48 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:30:44.623 Initializing NVMe Controllers 00:30:44.623 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:30:44.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:30:44.623 Initialization complete. Launching workers. 00:30:44.623 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10084, failed: 0 00:30:44.623 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1203, failed to submit 8881 00:30:44.623 success 745, unsuccess 458, failed 0 00:30:44.623 14:44:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:44.623 14:44:51 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:30:47.934 Initializing NVMe Controllers 00:30:47.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:30:47.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:30:47.934 Initialization complete. Launching workers. 00:30:47.934 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5999, failed: 0 00:30:47.934 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1229, failed to submit 4770 00:30:47.934 success 246, unsuccess 983, failed 0 00:30:47.934 14:44:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:47.934 14:44:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:30:51.218 Initializing NVMe Controllers 00:30:51.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:30:51.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:30:51.218 Initialization complete. Launching workers. 
00:30:51.218 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 29042, failed: 0 00:30:51.218 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2652, failed to submit 26390 00:30:51.218 success 331, unsuccess 2321, failed 0 00:30:51.218 14:44:58 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:30:51.218 14:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.218 14:44:58 -- common/autotest_common.sh@10 -- # set +x 00:30:51.218 14:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.218 14:44:58 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:51.218 14:44:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.219 14:44:58 -- common/autotest_common.sh@10 -- # set +x 00:30:51.785 14:44:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.785 14:44:58 -- target/abort_qd_sizes.sh@62 -- # killprocess 93331 00:30:51.785 14:44:58 -- common/autotest_common.sh@936 -- # '[' -z 93331 ']' 00:30:51.785 14:44:58 -- common/autotest_common.sh@940 -- # kill -0 93331 00:30:51.785 14:44:58 -- common/autotest_common.sh@941 -- # uname 00:30:51.785 14:44:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:51.785 14:44:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93331 00:30:51.785 14:44:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:51.785 killing process with pid 93331 00:30:51.785 14:44:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:51.785 14:44:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93331' 00:30:51.785 14:44:58 -- common/autotest_common.sh@955 -- # kill 93331 00:30:51.785 14:44:58 -- common/autotest_common.sh@960 -- # wait 93331 00:30:52.043 00:30:52.043 real 0m10.753s 00:30:52.043 user 0m43.997s 00:30:52.043 sys 0m1.768s 00:30:52.043 14:44:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:52.043 14:44:58 -- common/autotest_common.sh@10 -- # set +x 00:30:52.043 ************************************ 00:30:52.043 END TEST spdk_target_abort 00:30:52.043 ************************************ 00:30:52.043 14:44:58 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:30:52.043 14:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:52.043 14:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:52.043 14:44:58 -- common/autotest_common.sh@10 -- # set +x 00:30:52.043 ************************************ 00:30:52.043 START TEST kernel_target_abort 00:30:52.043 ************************************ 00:30:52.043 14:44:58 -- common/autotest_common.sh@1114 -- # kernel_target 00:30:52.043 14:44:58 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:30:52.043 14:44:58 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:30:52.043 14:44:58 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:30:52.043 14:44:58 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:30:52.043 14:44:58 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:30:52.043 14:44:58 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:30:52.043 14:44:58 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:52.043 14:44:58 -- nvmf/common.sh@627 -- # local block nvme 00:30:52.043 14:44:58 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:30:52.043 14:44:58 -- nvmf/common.sh@630 -- # modprobe nvmet 00:30:52.043 14:44:58 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:52.043 14:44:58 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:52.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:52.579 Waiting for block devices as requested 00:30:52.579 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:52.579 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:30:52.579 14:44:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:30:52.579 14:44:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:52.579 14:44:59 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:30:52.579 14:44:59 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:30:52.579 14:44:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:52.838 No valid GPT data, bailing 00:30:52.838 14:44:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:52.838 14:44:59 -- scripts/common.sh@393 -- # pt= 00:30:52.838 14:44:59 -- scripts/common.sh@394 -- # return 1 00:30:52.838 14:44:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:30:52.838 14:44:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:30:52.838 14:44:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:52.838 14:44:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:30:52.838 14:44:59 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:30:52.838 14:44:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:52.838 No valid GPT data, bailing 00:30:52.838 14:44:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:52.838 14:44:59 -- scripts/common.sh@393 -- # pt= 00:30:52.838 14:44:59 -- scripts/common.sh@394 -- # return 1 00:30:52.838 14:44:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:30:52.838 14:44:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:30:52.838 14:44:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:30:52.838 14:44:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:30:52.838 14:44:59 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:30:52.838 14:44:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:30:52.838 No valid GPT data, bailing 00:30:52.838 14:44:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:30:52.838 14:44:59 -- scripts/common.sh@393 -- # pt= 00:30:52.838 14:44:59 -- scripts/common.sh@394 -- # return 1 00:30:52.838 14:44:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:30:52.838 14:44:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:30:52.838 14:44:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:30:52.838 14:44:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:30:52.838 14:44:59 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:30:52.838 14:44:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:30:53.098 No valid GPT data, bailing 00:30:53.098 14:44:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:30:53.098 14:44:59 -- scripts/common.sh@393 -- # pt= 00:30:53.098 14:44:59 -- scripts/common.sh@394 -- # return 1 00:30:53.098 14:44:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:30:53.098 14:44:59 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:30:53.098 14:44:59 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:30:53.098 14:44:59 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:30:53.098 14:44:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:53.098 14:44:59 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:30:53.098 14:44:59 -- nvmf/common.sh@654 -- # echo 1 00:30:53.098 14:44:59 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:30:53.098 14:44:59 -- nvmf/common.sh@656 -- # echo 1 00:30:53.098 14:44:59 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:30:53.098 14:44:59 -- nvmf/common.sh@663 -- # echo tcp 00:30:53.098 14:44:59 -- nvmf/common.sh@664 -- # echo 4420 00:30:53.098 14:44:59 -- nvmf/common.sh@665 -- # echo ipv4 00:30:53.098 14:44:59 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:53.098 14:44:59 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f4dc61da-51d8-47f8-bc7f-592b2964f87d --hostid=f4dc61da-51d8-47f8-bc7f-592b2964f87d -a 10.0.0.1 -t tcp -s 4420 00:30:53.098 00:30:53.098 Discovery Log Number of Records 2, Generation counter 2 00:30:53.098 =====Discovery Log Entry 0====== 00:30:53.098 trtype: tcp 00:30:53.098 adrfam: ipv4 00:30:53.098 subtype: current discovery subsystem 00:30:53.098 treq: not specified, sq flow control disable supported 00:30:53.098 portid: 1 00:30:53.098 trsvcid: 4420 00:30:53.098 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:53.098 traddr: 10.0.0.1 00:30:53.098 eflags: none 00:30:53.098 sectype: none 00:30:53.098 =====Discovery Log Entry 1====== 00:30:53.098 trtype: tcp 00:30:53.098 adrfam: ipv4 00:30:53.098 subtype: nvme subsystem 00:30:53.098 treq: not specified, sq flow control disable supported 00:30:53.098 portid: 1 00:30:53.098 trsvcid: 4420 00:30:53.098 subnqn: kernel_target 00:30:53.098 traddr: 10.0.0.1 00:30:53.098 eflags: none 00:30:53.098 sectype: none 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:53.098 14:44:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:30:56.387 Initializing NVMe Controllers 00:30:56.387 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:30:56.387 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:30:56.387 Initialization complete. Launching workers. 00:30:56.387 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 34670, failed: 0 00:30:56.387 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 34670, failed to submit 0 00:30:56.387 success 0, unsuccess 34670, failed 0 00:30:56.387 14:45:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:56.387 14:45:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:30:59.670 Initializing NVMe Controllers 00:30:59.670 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:30:59.670 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:30:59.670 Initialization complete. Launching workers. 00:30:59.670 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 79151, failed: 0 00:30:59.670 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33744, failed to submit 45407 00:30:59.670 success 0, unsuccess 33744, failed 0 00:30:59.670 14:45:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:59.670 14:45:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:02.949 Initializing NVMe Controllers 00:31:02.949 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:31:02.949 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:31:02.949 Initialization complete. Launching workers. 
00:31:02.949 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 102108, failed: 0 00:31:02.949 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 25540, failed to submit 76568 00:31:02.949 success 0, unsuccess 25540, failed 0 00:31:02.949 14:45:09 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:31:02.949 14:45:09 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:31:02.949 14:45:09 -- nvmf/common.sh@677 -- # echo 0 00:31:02.949 14:45:09 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:31:02.949 14:45:09 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:31:02.949 14:45:09 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:02.949 14:45:09 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:31:02.949 14:45:09 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:31:02.949 14:45:09 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:31:02.949 ************************************ 00:31:02.949 END TEST kernel_target_abort 00:31:02.949 ************************************ 00:31:02.949 00:31:02.949 real 0m10.567s 00:31:02.949 user 0m5.507s 00:31:02.949 sys 0m2.241s 00:31:02.949 14:45:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:02.949 14:45:09 -- common/autotest_common.sh@10 -- # set +x 00:31:02.949 14:45:09 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:31:02.949 14:45:09 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:31:02.949 14:45:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:02.949 14:45:09 -- nvmf/common.sh@116 -- # sync 00:31:02.949 14:45:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:02.949 14:45:09 -- nvmf/common.sh@119 -- # set +e 00:31:02.949 14:45:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:02.949 14:45:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:02.949 rmmod nvme_tcp 00:31:02.949 rmmod nvme_fabrics 00:31:02.949 rmmod nvme_keyring 00:31:02.949 14:45:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:02.949 Process with pid 93331 is not found 00:31:02.949 14:45:09 -- nvmf/common.sh@123 -- # set -e 00:31:02.949 14:45:09 -- nvmf/common.sh@124 -- # return 0 00:31:02.949 14:45:09 -- nvmf/common.sh@477 -- # '[' -n 93331 ']' 00:31:02.949 14:45:09 -- nvmf/common.sh@478 -- # killprocess 93331 00:31:02.949 14:45:09 -- common/autotest_common.sh@936 -- # '[' -z 93331 ']' 00:31:02.949 14:45:09 -- common/autotest_common.sh@940 -- # kill -0 93331 00:31:02.949 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (93331) - No such process 00:31:02.949 14:45:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 93331 is not found' 00:31:02.949 14:45:09 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:02.949 14:45:09 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:03.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:03.513 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:31:03.513 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:31:03.513 14:45:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:03.513 14:45:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:03.513 14:45:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.513 14:45:10 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:31:03.513 14:45:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.513 14:45:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:03.513 14:45:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.513 14:45:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:31:03.513 00:31:03.513 real 0m25.101s 00:31:03.513 user 0m51.068s 00:31:03.513 sys 0m5.397s 00:31:03.513 ************************************ 00:31:03.513 END TEST nvmf_abort_qd_sizes 00:31:03.513 ************************************ 00:31:03.513 14:45:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:03.513 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:31:03.780 14:45:10 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:03.780 14:45:10 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:31:03.780 14:45:10 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:31:03.780 14:45:10 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:31:03.780 14:45:10 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:31:03.780 14:45:10 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:31:03.780 14:45:10 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:31:03.780 14:45:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:03.780 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:31:03.780 14:45:10 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:31:03.780 14:45:10 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:31:03.780 14:45:10 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:31:03.780 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:31:05.187 INFO: APP EXITING 00:31:05.187 INFO: killing all VMs 00:31:05.187 INFO: killing vhost app 00:31:05.187 INFO: EXIT DONE 00:31:06.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:06.121 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:31:06.121 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:31:06.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:06.686 Cleaning 00:31:06.686 Removing: /var/run/dpdk/spdk0/config 00:31:06.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:06.943 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:06.943 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:06.943 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:06.943 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:06.943 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:06.943 Removing: /var/run/dpdk/spdk1/config 00:31:06.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:06.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:06.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:31:06.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:06.944 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:06.944 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:06.944 Removing: /var/run/dpdk/spdk2/config 00:31:06.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:06.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:06.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:06.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:06.944 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:06.944 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:06.944 Removing: /var/run/dpdk/spdk3/config 00:31:06.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:06.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:06.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:06.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:06.944 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:06.944 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:06.944 Removing: /var/run/dpdk/spdk4/config 00:31:06.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:06.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:06.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:06.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:06.944 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:06.944 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:06.944 Removing: /dev/shm/nvmf_trace.0 00:31:06.944 Removing: /dev/shm/spdk_tgt_trace.pid55674 00:31:06.944 Removing: /var/run/dpdk/spdk0 00:31:06.944 Removing: /var/run/dpdk/spdk1 00:31:06.944 Removing: /var/run/dpdk/spdk2 00:31:06.944 Removing: /var/run/dpdk/spdk3 00:31:06.944 Removing: /var/run/dpdk/spdk4 00:31:06.944 Removing: /var/run/dpdk/spdk_pid55478 00:31:06.944 Removing: /var/run/dpdk/spdk_pid55674 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56020 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56307 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56514 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56604 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56709 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56822 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56860 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56901 00:31:06.944 Removing: /var/run/dpdk/spdk_pid56975 00:31:06.944 Removing: /var/run/dpdk/spdk_pid57104 00:31:06.944 Removing: /var/run/dpdk/spdk_pid57758 00:31:06.944 Removing: /var/run/dpdk/spdk_pid57822 00:31:06.944 Removing: /var/run/dpdk/spdk_pid57896 00:31:06.944 Removing: /var/run/dpdk/spdk_pid57924 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58009 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58037 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58146 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58174 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58231 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58261 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58318 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58348 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58507 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58543 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58624 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58694 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58724 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58782 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58806 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58842 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58867 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58900 
00:31:06.944 Removing: /var/run/dpdk/spdk_pid58921 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58961 00:31:06.944 Removing: /var/run/dpdk/spdk_pid58977 00:31:06.944 Removing: /var/run/dpdk/spdk_pid59017 00:31:06.944 Removing: /var/run/dpdk/spdk_pid59042 00:31:06.944 Removing: /var/run/dpdk/spdk_pid59071 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59096 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59139 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59159 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59193 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59218 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59253 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59272 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59312 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59334 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59373 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59388 00:31:07.201 Removing: /var/run/dpdk/spdk_pid59428 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59453 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59482 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59507 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59545 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59561 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59601 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59615 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59655 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59675 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59709 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59737 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59775 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59797 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59839 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59860 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59894 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59914 00:31:07.202 Removing: /var/run/dpdk/spdk_pid59955 00:31:07.202 Removing: /var/run/dpdk/spdk_pid60032 00:31:07.202 Removing: /var/run/dpdk/spdk_pid60159 00:31:07.202 Removing: /var/run/dpdk/spdk_pid60593 00:31:07.202 Removing: /var/run/dpdk/spdk_pid67571 00:31:07.202 Removing: /var/run/dpdk/spdk_pid67929 00:31:07.202 Removing: /var/run/dpdk/spdk_pid70327 00:31:07.202 Removing: /var/run/dpdk/spdk_pid70713 00:31:07.202 Removing: /var/run/dpdk/spdk_pid70988 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71028 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71305 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71307 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71371 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71429 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71484 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71522 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71535 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71556 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71600 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71602 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71662 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71720 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71786 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71824 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71826 00:31:07.202 Removing: /var/run/dpdk/spdk_pid71852 00:31:07.202 Removing: /var/run/dpdk/spdk_pid72147 00:31:07.202 Removing: /var/run/dpdk/spdk_pid72305 00:31:07.202 Removing: /var/run/dpdk/spdk_pid72571 00:31:07.202 Removing: /var/run/dpdk/spdk_pid72621 00:31:07.202 Removing: /var/run/dpdk/spdk_pid73009 00:31:07.202 Removing: /var/run/dpdk/spdk_pid73548 00:31:07.202 Removing: /var/run/dpdk/spdk_pid73991 00:31:07.202 Removing: /var/run/dpdk/spdk_pid74967 00:31:07.202 Removing: 
/var/run/dpdk/spdk_pid75975 00:31:07.202 Removing: /var/run/dpdk/spdk_pid76098 00:31:07.202 Removing: /var/run/dpdk/spdk_pid76166 00:31:07.202 Removing: /var/run/dpdk/spdk_pid77646 00:31:07.202 Removing: /var/run/dpdk/spdk_pid77894 00:31:07.202 Removing: /var/run/dpdk/spdk_pid78355 00:31:07.202 Removing: /var/run/dpdk/spdk_pid78460 00:31:07.202 Removing: /var/run/dpdk/spdk_pid78616 00:31:07.202 Removing: /var/run/dpdk/spdk_pid78657 00:31:07.202 Removing: /var/run/dpdk/spdk_pid78703 00:31:07.202 Removing: /var/run/dpdk/spdk_pid78748 00:31:07.202 Removing: /var/run/dpdk/spdk_pid78916 00:31:07.202 Removing: /var/run/dpdk/spdk_pid79063 00:31:07.202 Removing: /var/run/dpdk/spdk_pid79335 00:31:07.202 Removing: /var/run/dpdk/spdk_pid79458 00:31:07.202 Removing: /var/run/dpdk/spdk_pid79882 00:31:07.202 Removing: /var/run/dpdk/spdk_pid80273 00:31:07.460 Removing: /var/run/dpdk/spdk_pid80280 00:31:07.460 Removing: /var/run/dpdk/spdk_pid82529 00:31:07.460 Removing: /var/run/dpdk/spdk_pid82846 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83364 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83366 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83713 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83733 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83747 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83777 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83791 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83933 00:31:07.460 Removing: /var/run/dpdk/spdk_pid83946 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84049 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84051 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84159 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84161 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84644 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84688 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84846 00:31:07.460 Removing: /var/run/dpdk/spdk_pid84968 00:31:07.460 Removing: /var/run/dpdk/spdk_pid85371 00:31:07.460 Removing: /var/run/dpdk/spdk_pid85624 00:31:07.460 Removing: /var/run/dpdk/spdk_pid86129 00:31:07.460 Removing: /var/run/dpdk/spdk_pid86696 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87166 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87257 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87348 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87439 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87596 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87692 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87777 00:31:07.460 Removing: /var/run/dpdk/spdk_pid87867 00:31:07.460 Removing: /var/run/dpdk/spdk_pid88222 00:31:07.460 Removing: /var/run/dpdk/spdk_pid88928 00:31:07.460 Removing: /var/run/dpdk/spdk_pid90294 00:31:07.460 Removing: /var/run/dpdk/spdk_pid90502 00:31:07.460 Removing: /var/run/dpdk/spdk_pid90793 00:31:07.460 Removing: /var/run/dpdk/spdk_pid91100 00:31:07.460 Removing: /var/run/dpdk/spdk_pid91665 00:31:07.460 Removing: /var/run/dpdk/spdk_pid91670 00:31:07.460 Removing: /var/run/dpdk/spdk_pid92041 00:31:07.460 Removing: /var/run/dpdk/spdk_pid92197 00:31:07.460 Removing: /var/run/dpdk/spdk_pid92354 00:31:07.460 Removing: /var/run/dpdk/spdk_pid92452 00:31:07.460 Removing: /var/run/dpdk/spdk_pid92617 00:31:07.460 Removing: /var/run/dpdk/spdk_pid92726 00:31:07.460 Removing: /var/run/dpdk/spdk_pid93406 00:31:07.460 Removing: /var/run/dpdk/spdk_pid93441 00:31:07.460 Removing: /var/run/dpdk/spdk_pid93472 00:31:07.460 Removing: /var/run/dpdk/spdk_pid93724 00:31:07.460 Removing: /var/run/dpdk/spdk_pid93758 00:31:07.460 Removing: /var/run/dpdk/spdk_pid93792 00:31:07.460 Clean 00:31:07.460 killing process with pid 
49771 00:31:07.717 killing process with pid 49774 00:31:07.717 14:45:14 -- common/autotest_common.sh@1446 -- # return 0 00:31:07.717 14:45:14 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:31:07.717 14:45:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:07.717 14:45:14 -- common/autotest_common.sh@10 -- # set +x 00:31:07.717 14:45:14 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:31:07.717 14:45:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:07.717 14:45:14 -- common/autotest_common.sh@10 -- # set +x 00:31:07.717 14:45:14 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:07.717 14:45:14 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:07.717 14:45:14 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:07.717 14:45:14 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:31:07.717 14:45:14 -- spdk/autotest.sh@383 -- # hostname 00:31:07.717 14:45:14 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:07.975 geninfo: WARNING: invalid characters removed from testname! 00:31:34.513 14:45:37 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:34.513 14:45:40 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:35.885 14:45:42 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:38.418 14:45:45 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:40.949 14:45:47 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:42.851 14:45:49 -- spdk/autotest.sh@392 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:45.392 14:45:51 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:45.392 14:45:51 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:31:45.392 14:45:51 -- common/autotest_common.sh@1690 -- $ lcov --version 00:31:45.392 14:45:51 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:31:45.392 14:45:52 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:31:45.392 14:45:52 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:31:45.392 14:45:52 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:31:45.392 14:45:52 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:31:45.392 14:45:52 -- scripts/common.sh@335 -- $ IFS=.-: 00:31:45.392 14:45:52 -- scripts/common.sh@335 -- $ read -ra ver1 00:31:45.392 14:45:52 -- scripts/common.sh@336 -- $ IFS=.-: 00:31:45.392 14:45:52 -- scripts/common.sh@336 -- $ read -ra ver2 00:31:45.392 14:45:52 -- scripts/common.sh@337 -- $ local 'op=<' 00:31:45.392 14:45:52 -- scripts/common.sh@339 -- $ ver1_l=2 00:31:45.392 14:45:52 -- scripts/common.sh@340 -- $ ver2_l=1 00:31:45.392 14:45:52 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:31:45.392 14:45:52 -- scripts/common.sh@343 -- $ case "$op" in 00:31:45.392 14:45:52 -- scripts/common.sh@344 -- $ : 1 00:31:45.392 14:45:52 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:31:45.392 14:45:52 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:45.392 14:45:52 -- scripts/common.sh@364 -- $ decimal 1 00:31:45.392 14:45:52 -- scripts/common.sh@352 -- $ local d=1 00:31:45.392 14:45:52 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:31:45.392 14:45:52 -- scripts/common.sh@354 -- $ echo 1 00:31:45.392 14:45:52 -- scripts/common.sh@364 -- $ ver1[v]=1 00:31:45.392 14:45:52 -- scripts/common.sh@365 -- $ decimal 2 00:31:45.392 14:45:52 -- scripts/common.sh@352 -- $ local d=2 00:31:45.392 14:45:52 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:31:45.392 14:45:52 -- scripts/common.sh@354 -- $ echo 2 00:31:45.392 14:45:52 -- scripts/common.sh@365 -- $ ver2[v]=2 00:31:45.392 14:45:52 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:31:45.392 14:45:52 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:31:45.392 14:45:52 -- scripts/common.sh@367 -- $ return 0 00:31:45.392 14:45:52 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.392 14:45:52 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:31:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.392 --rc genhtml_branch_coverage=1 00:31:45.392 --rc genhtml_function_coverage=1 00:31:45.392 --rc genhtml_legend=1 00:31:45.392 --rc geninfo_all_blocks=1 00:31:45.392 --rc geninfo_unexecuted_blocks=1 00:31:45.392 00:31:45.392 ' 00:31:45.392 14:45:52 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:31:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.392 --rc genhtml_branch_coverage=1 00:31:45.392 --rc genhtml_function_coverage=1 00:31:45.392 --rc genhtml_legend=1 00:31:45.392 --rc geninfo_all_blocks=1 00:31:45.392 --rc geninfo_unexecuted_blocks=1 00:31:45.392 00:31:45.392 ' 00:31:45.392 14:45:52 -- 
common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:31:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.392 --rc genhtml_branch_coverage=1 00:31:45.392 --rc genhtml_function_coverage=1 00:31:45.392 --rc genhtml_legend=1 00:31:45.392 --rc geninfo_all_blocks=1 00:31:45.392 --rc geninfo_unexecuted_blocks=1 00:31:45.392 00:31:45.392 ' 00:31:45.392 14:45:52 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:31:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.392 --rc genhtml_branch_coverage=1 00:31:45.392 --rc genhtml_function_coverage=1 00:31:45.392 --rc genhtml_legend=1 00:31:45.392 --rc geninfo_all_blocks=1 00:31:45.392 --rc geninfo_unexecuted_blocks=1 00:31:45.392 00:31:45.392 ' 00:31:45.392 14:45:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:45.392 14:45:52 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:45.392 14:45:52 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.392 14:45:52 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.392 14:45:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.392 14:45:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.392 14:45:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.392 14:45:52 -- paths/export.sh@5 -- $ export PATH 00:31:45.392 14:45:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.392 14:45:52 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:45.392 14:45:52 -- common/autobuild_common.sh@440 -- $ date +%s 00:31:45.392 14:45:52 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733496352.XXXXXX 00:31:45.392 14:45:52 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733496352.mv6NAi 00:31:45.392 14:45:52 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:31:45.392 14:45:52 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:31:45.392 14:45:52 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:45.392 14:45:52 -- 
common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:45.392 14:45:52 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:45.392 14:45:52 -- common/autobuild_common.sh@456 -- $ get_config_params 00:31:45.392 14:45:52 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:45.392 14:45:52 -- common/autotest_common.sh@10 -- $ set +x 00:31:45.392 14:45:52 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:31:45.392 14:45:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:45.392 14:45:52 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:45.392 14:45:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:45.392 14:45:52 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:45.392 14:45:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:45.392 14:45:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:45.392 14:45:52 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:45.392 14:45:52 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:45.392 14:45:52 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:45.392 14:45:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:45.392 + [[ -n 5238 ]] 00:31:45.392 + sudo kill 5238 00:31:45.402 [Pipeline] } 00:31:45.419 [Pipeline] // timeout 00:31:45.424 [Pipeline] } 00:31:45.439 [Pipeline] // stage 00:31:45.445 [Pipeline] } 00:31:45.460 [Pipeline] // catchError 00:31:45.470 [Pipeline] stage 00:31:45.472 [Pipeline] { (Stop VM) 00:31:45.485 [Pipeline] sh 00:31:45.768 + vagrant halt 00:31:49.065 ==> default: Halting domain... 00:31:55.663 [Pipeline] sh 00:31:55.943 + vagrant destroy -f 00:31:59.230 ==> default: Removing domain... 00:31:59.241 [Pipeline] sh 00:31:59.521 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:31:59.529 [Pipeline] } 00:31:59.544 [Pipeline] // stage 00:31:59.550 [Pipeline] } 00:31:59.564 [Pipeline] // dir 00:31:59.570 [Pipeline] } 00:31:59.584 [Pipeline] // wrap 00:31:59.591 [Pipeline] } 00:31:59.604 [Pipeline] // catchError 00:31:59.614 [Pipeline] stage 00:31:59.617 [Pipeline] { (Epilogue) 00:31:59.631 [Pipeline] sh 00:31:59.913 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:05.199 [Pipeline] catchError 00:32:05.201 [Pipeline] { 00:32:05.214 [Pipeline] sh 00:32:05.496 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:05.755 Artifacts sizes are good 00:32:05.765 [Pipeline] } 00:32:05.780 [Pipeline] // catchError 00:32:05.794 [Pipeline] archiveArtifacts 00:32:05.837 Archiving artifacts 00:32:05.979 [Pipeline] cleanWs 00:32:05.992 [WS-CLEANUP] Deleting project workspace... 00:32:05.993 [WS-CLEANUP] Deferred wipeout is used... 00:32:05.999 [WS-CLEANUP] done 00:32:06.001 [Pipeline] } 00:32:06.017 [Pipeline] // stage 00:32:06.022 [Pipeline] } 00:32:06.036 [Pipeline] // node 00:32:06.042 [Pipeline] End of Pipeline 00:32:06.092 Finished: SUCCESS